Prediction and Optimization of Ion Transport Characteristics in Nanoparticle-Based Electrolytes Using Convolutional Neural Networks

2021 ◽  
Author(s):  
Sanket Kadulkar ◽  
Michael Howard ◽  
Thomas Truskett ◽  
Venkat Ganesan

We develop a convolutional neural network (CNN) model to predict the diffusivity of cations in nanoparticle-based electrolytes, and use it to identify the characteristics of morphologies which exhibit optimal transport properties. The ground truth data is obtained from kinetic Monte Carlo (kMC) simulations of cation transport parameterized using a multiscale modeling strategy. We implement deep learning approaches to quantitatively link the diffusivity of cations to the spatial arrangement of the nanoparticles. We then integrate the trained CNN model with a topology optimization algorithm for accelerated discovery of nanoparticle morphologies that exhibit optimal cation diffusivities at a specified nanoparticle loading, and we investigate the ability of the CNN model to quantitatively account for the influence of interparticle spatial correlations on cation diffusivity. Finally, using data-driven approaches, we explore how simple descriptors of nanoparticle morphology correlate with cation diffusivity, thus providing a physical rationale for the observed optimal microstructures. The results of this study highlight the capability of CNNs to serve as surrogate models for structure–property relationships in composites with monodisperse spherical particles, which can in turn be used with inverse methods to discover morphologies that produce optimal target properties.
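
A minimal sketch of a 3D CNN surrogate of the kind described above, assuming the nanoparticle morphology is voxelized into a binary occupancy grid and the target is a single scalar cation diffusivity; the architecture, grid size, and loss shown here are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch only: a small 3D CNN regressor mapping a voxelized
# nanoparticle morphology (binary occupancy grid) to a scalar cation diffusivity.
# Architecture and hyperparameters are assumptions, not the paper's model.
import torch
import torch.nn as nn

class DiffusivityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # global pooling keeps the model grid-size-agnostic
        )
        self.regressor = nn.Sequential(nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        return self.regressor(self.features(x))

# Hypothetical usage: a batch of 32^3 occupancy grids with targets from kMC simulations.
model = DiffusivityCNN()
grids = torch.rand(8, 1, 32, 32, 32).round()   # stand-in morphologies
kmc_diffusivity = torch.rand(8, 1)             # stand-in kMC ground truth
loss = nn.functional.mse_loss(model(grids), kmc_diffusivity)
loss.backward()
```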


10.29007/3lks ◽  
2019 ◽  
Author(s):  
Axel Tanner ◽  
Martin Strohmeier

Anomalies in the airspace can provide an indicator of critical events and changes which go beyond aviation. Devising techniques that can detect abnormal patterns can provide intelligence and information ranging from weather to political events. This work presents our latest findings in detecting such anomalies in air traffic patterns using ADS-B data provided by the OpenSky network [8]. After a discussion of specific problems in anomaly detection in air traffic data, we show an experiment in a regional setting, evaluating air traffic densities with the Gini index, and a second experiment investigating the runway use at Zurich airport. In the latter case, strong available ground truth data allow us to better understand and confirm the findings of different learning approaches.
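
As a rough illustration of the density-based measure mentioned above, the following computes a Gini index over per-cell flight counts; how ADS-B positions are binned into cells, and the exact index definition used in the paper, are assumptions here.

```python
# Sketch: Gini index of air traffic density over spatial grid cells.
# Binning ADS-B positions into cells is a simplifying assumption.
import numpy as np

def gini(counts):
    """Gini index of a non-negative 1D array (0 = perfectly even, 1 = fully concentrated)."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    # Standard Lorenz-curve-based formula.
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical per-cell flight counts for one day in a regional grid.
daily_counts = np.array([120, 95, 3, 0, 440, 12, 7, 60])
print(f"Gini index: {gini(daily_counts):.3f}")
```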


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laura K. Young ◽  
Hannah E. Smithson

High resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical research and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground-truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system, nor to human participants is necessary for development.
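
A toy sketch of the synthetic-image idea, assuming a jittered hexagonal cone arrangement, a Gaussian blur as a stand-in for the system point spread function, and additive noise; ERICA's self-organising mosaic, eye-movement model, and optical model are considerably more detailed.

```python
# Toy sketch: render a synthetic cone-mosaic-like image with known ground-truth
# cone positions. The jittered hexagonal lattice, Gaussian PSF, and noise model
# are simplifying assumptions, not ERICA's implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
size, spacing = 256, 10
ys, xs = np.mgrid[0:size:spacing, 0:size:spacing].astype(float)
xs[::2] += spacing / 2                      # offset rows -> roughly hexagonal packing
xs += rng.normal(0, 1.0, xs.shape)          # positional jitter
ys += rng.normal(0, 1.0, ys.shape)
ground_truth = np.column_stack([xs.ravel(), ys.ravel()])   # known cone centres

image = np.zeros((size, size))
for x, y in ground_truth:
    image[int(round(y)) % size, int(round(x)) % size] = 1.0   # ideal point source per cone
image = gaussian_filter(image, sigma=2.0)   # stand-in for diffraction/residual blur
image += rng.normal(0, 0.01, image.shape)   # detector noise
```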


2016 ◽  
Author(s):  
Ryan Poplin ◽  
Pi-Chuan Chang ◽  
David Alexander ◽  
Scott Schwartz ◽  
Thomas Colthurst ◽  
...  

Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual’s genome [1] by calling genetic variants present in an individual using billions of short, errorful sequence reads [2]. Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome [3,4]. Here we show that a deep convolutional neural network [5] can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the “highest performance” award for SNPs in an FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other mammalian species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data.
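
A sketch of encoding aligned reads around a candidate site as a multi-channel pileup image, in the spirit of the approach above; the channel layout, window size, and normalisation are assumptions, and the real tool encodes substantially more information.

```python
# Sketch of encoding a read pileup around a candidate variant site as a
# multi-channel image suitable for a 2D CNN classifier. Channel layout,
# window size, and scaling are assumptions, not DeepVariant's encoding.
import numpy as np

BASES = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

def encode_pileup(reads, window=15, max_reads=20):
    """reads: list of (sequence, base_qualities, is_reverse) tuples centred on the site."""
    img = np.zeros((max_reads, window, 3), dtype=np.float32)
    for row, (seq, quals, is_reverse) in enumerate(reads[:max_reads]):
        for col in range(min(window, len(seq))):
            img[row, col, 0] = BASES.get(seq[col], 0.0)    # base identity
            img[row, col, 1] = quals[col] / 60.0           # base quality
            img[row, col, 2] = 1.0 if is_reverse else 0.0  # strand
    return img

# Hypothetical toy input: two reads spanning a 15 bp window.
reads = [("ACGTACGTACGTACG", [30] * 15, False),
         ("ACGTACGTACGTACG", [40] * 15, True)]
pileup = encode_pileup(reads)   # shape (20, 15, 3), ready for a small 2D CNN
```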


2021 ◽  
Vol 11 (20) ◽  
pp. 9724
Author(s):  
Junuk Cha ◽  
Muhammad Saqlain ◽  
Changhwa Lee ◽  
Seongyeong Lee ◽  
Seungeun Lee ◽  
...  

Three-dimensional human pose and shape estimation is an important problem in the computer vision community, with numerous applications such as augmented reality, virtual reality, human-computer interaction, and so on. However, training accurate 3D human pose and shape estimators based on deep learning approaches requires a large number of images and corresponding 3D ground-truth pose pairs, which are costly to collect. To relieve this constraint, various types of weakly or self-supervised pose estimation approaches have been proposed. Nevertheless, these methods still involve supervision signals, which require effort to collect, such as unpaired large-scale 3D ground truth data, a small subset of 3D labeled data, video priors, and so on. Often, they require installing equipment such as a calibrated multi-camera system to acquire strong multi-view priors. In this paper, we propose a self-supervised learning framework for 3D human pose and shape estimation that does not require other forms of supervision signals while using only single 2D images. Our framework inputs single 2D images, estimates human 3D meshes in the intermediate layers, and is trained to solve four types of self-supervision tasks (i.e., three image manipulation tasks and one neural rendering task) whose ground truths are all based on the single 2D images themselves. Through experiments, we demonstrate the effectiveness of our approach on 3D human pose benchmark datasets (i.e., Human3.6M, 3DPW, and LSP), where we present the new state-of-the-art among weakly/self-supervised methods.
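
An illustrative sketch of one self-supervision signal of the kind described above, in which predictions on an image and on a rotated copy must agree once the known rotation is undone; the estimator interface, keypoint format, and the specific loss are assumptions rather than the paper's four tasks.

```python
# Sketch of an image-manipulation consistency loss: predictions on a 180-degree
# rotated image, mapped back through the known rotation, should match the
# predictions on the original image. Interface and loss are assumptions.
import torch

def rotation_consistency_loss(estimator, image):
    """estimator: callable mapping images (B,3,H,W) to 2D keypoints (B,K,2)
    in coordinates normalised to [-1, 1] about the image centre."""
    rotated = torch.rot90(image, k=2, dims=(2, 3))   # manipulate the input: 180-degree rotation
    kp_orig = estimator(image)
    kp_rot = estimator(rotated)
    # Under a 180-degree rotation about the centre, (x, y) -> (-x, -y),
    # so negating the rotated prediction should recover the original one.
    return torch.nn.functional.mse_loss(-kp_rot, kp_orig)

# Hypothetical usage with a stand-in estimator (17 keypoints fixed at the centre).
dummy = lambda img: torch.zeros(img.shape[0], 17, 2)
loss = rotation_consistency_loss(dummy, torch.rand(2, 3, 64, 64))
```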


Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1361
Author(s):  
Ajaykumar Unagar ◽  
Yuan Tian ◽  
Manuel Arias Chao ◽  
Olga Fink

Lithium-ion (Li-ion) batteries have recently become pervasive and are used in many physical assets. For the effective management of the batteries, reliable predictions of the end-of-discharge (EOD) and end-of-life (EOL) are essential. Many detailed electrochemical models have been developed for the batteries. Their parameters are calibrated before they are taken into operation and are typically not re-calibrated during operation. However, the degradation of batteries increases the reality gap between the computational models and the physical systems and leads to inaccurate predictions of EOD/EOL. The current calibration approaches are either computationally expensive (model-based calibration) or require large amounts of ground truth data for degradation parameters (supervised data-driven calibration). This is often infeasible for many practical applications. In this paper, we introduce a reinforcement learning-based framework for reliably inferring calibration parameters of battery models in real time. Most importantly, the proposed methodology does not need any labeled data samples, i.e., observations paired with the ground truth parameters. The experimental results demonstrate that our framework is capable of inferring the model parameters in real time with better accuracy compared to approaches based on unscented Kalman filters. Furthermore, our results show better generalizability than supervised learning approaches even though our methodology does not rely on ground truth information during training.
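
A sketch of framing calibration as a reinforcement-learning environment in the spirit of the approach above: the agent nudges a battery-model parameter so that simulated voltage matches measurements; the toy discharge model, state, and reward are assumptions, not the paper's electrochemical model or agent.

```python
# Sketch: an RL-style calibration environment. The agent's action adjusts a
# degradation parameter; the reward is the (negative) mismatch between the
# simulated and measured voltage traces. All specifics here are assumptions.
import numpy as np

class CalibrationEnv:
    def __init__(self, measured_voltage, simulate):
        self.measured = measured_voltage           # observed voltage trace
        self.simulate = simulate                   # simulate(param) -> voltage trace
        self.param = 1.0                           # initial parameter guess

    def step(self, action):
        """action: small additive adjustment to the calibration parameter."""
        self.param = float(np.clip(self.param + action, 0.5, 1.5))
        residual = self.measured - self.simulate(self.param)
        reward = -float(np.mean(residual ** 2))   # better fit -> higher reward
        state = np.array([self.param, np.mean(residual)])
        return state, reward

# Toy usage with a linear stand-in for the battery model.
t = np.linspace(0, 1, 50)
simulate = lambda p: 4.2 - 1.5 * t / p             # hypothetical discharge curve
env = CalibrationEnv(simulate(0.8), simulate)      # true parameter is 0.8
state, reward = env.step(-0.05)                    # one agent step toward the true value
```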


Author(s):  
Helen Spiers ◽  
Harry Songhurst ◽  
Luke Nightingale ◽  
Joost de Folter ◽  
Roger Hutchings ◽  
...  

Advancements in volume electron microscopy mean it is now possible to generate thousands of serial images at nanometre resolution overnight, yet the gold standard approach for data analysis remains manual segmentation by an expert microscopist, resulting in a critical research bottleneck. Although some machine learning approaches exist in this domain, we remain far from realising the aspiration of a highly accurate, yet generic, automated analysis approach, with a major obstacle being lack of sufficient high-quality ground-truth data. To address this, we developed a novel citizen science project, Etch a Cell, to enable volunteers to manually segment the nuclear envelope of HeLa cells imaged with Serial Blockface SEM. We present our approach for aggregating multiple volunteer annotations to generate a high quality consensus segmentation, and demonstrate that data produced exclusively by volunteers can be used to train a highly accurate machine learning algorithm for automatic segmentation of the nuclear envelope, which we share here, in addition to our archived benchmark data.
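
A minimal sketch of aggregating several volunteer segmentations into a consensus mask by pixel-wise voting; the actual Etch a Cell aggregation is more involved, and the majority threshold here is an assumption.

```python
# Sketch: pixel-wise voting over volunteer masks to form a consensus segmentation.
# The 50% threshold is an assumption, not the project's aggregation rule.
import numpy as np

def consensus_mask(annotations, min_fraction=0.5):
    """annotations: array of shape (n_volunteers, H, W) with binary masks."""
    votes = np.mean(np.asarray(annotations, dtype=float), axis=0)
    return votes >= min_fraction

# Hypothetical: three volunteers annotate a 4x4 tile; consensus keeps pixels
# marked by at least half of them.
masks = np.stack([np.random.default_rng(i).integers(0, 2, (4, 4)) for i in range(3)])
print(consensus_mask(masks).astype(int))
```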


Energies ◽  
2020 ◽  
Vol 14 (1) ◽  
pp. 75
Author(s):  
Benjamin Völker ◽  
Marc Pfeifer ◽  
Philipp M. Scholl ◽  
Bernd Becker

In order to reduce the electricity consumption in our homes, a first step is to make the user aware of it. Raising such awareness, however, demands pinpointing for users the specific appliances that unnecessarily consume electricity. A retrofittable and scalable way to provide appliance-specific consumption is offered by Non-Intrusive Load Monitoring methods. These methods use a single electricity meter to record the aggregated consumption of all appliances and disaggregate it into the consumption of each individual appliance using advanced algorithms, usually utilizing machine-learning approaches. Since these approaches are often supervised, labeled ground-truth data need to be collected in advance. Labeling on-phases of devices is already a tedious process, but, if further information about internal device states is required (e.g., the intensity of an HVAC), manual post-processing quickly becomes infeasible. We propose a novel data collection and labeling framework for Non-Intrusive Load Monitoring. The framework comprises the hardware and software required to record and (semi-automatically) label the data. The hardware setup includes a smart-meter device to record aggregated consumption data and multiple socket meters to record appliance-level data. Labeling is performed in a semi-automatic post-processing step guided by a graphical user interface, which reduced the labeling effort by 72% compared to a manual approach. We evaluated our framework and present the FIRED dataset. The dataset features uninterrupted, time-synced aggregated and individual device voltage and current waveforms with distinct state transition labels for a total of 101 days.
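
A sketch of the kind of semi-automatic state labeling described above, detecting on/off transitions of an appliance from its socket-meter power trace with a simple hysteresis threshold; the thresholds and the example trace are assumptions, and in the framework such labels would be reviewed in the GUI-guided post-processing step.

```python
# Sketch: detect appliance on-phases from a socket-meter power trace using a
# threshold with hysteresis. Thresholds and the toy trace are assumptions.
import numpy as np

def label_on_phases(power, on_threshold=10.0, off_threshold=5.0):
    """power: 1D array of appliance power in watts; returns list of (start, end) indices."""
    events, start, on = [], None, False
    for i, p in enumerate(power):
        if not on and p > on_threshold:
            on, start = True, i
        elif on and p < off_threshold:
            on = False
            events.append((start, i))
    if on:
        events.append((start, len(power)))
    return events

# Hypothetical trace: appliance switches on twice.
trace = np.array([0, 0, 60, 65, 63, 1, 0, 0, 120, 118, 2, 0], dtype=float)
print(label_on_phases(trace))   # -> [(2, 5), (8, 10)]
```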


2021 ◽  
Author(s):  
Laura K Young ◽  
Hannah E Smithson

High resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system, nor to human participants is necessary for development.


2021 ◽  
Vol 13 (10) ◽  
pp. 1966
Author(s):  
Christopher W Smith ◽  
Santosh K Panda ◽  
Uma S Bhatt ◽  
Franz J Meyer ◽  
Anushree Badola ◽  
...  

In recent years, there have been rapid improvements in both remote sensing methods and satellite image availability that have the potential to massively improve burn severity assessments of the Alaskan boreal forest. In this study, we utilized recent pre- and post-fire Sentinel-2 satellite imagery of the 2019 Nugget Creek and Shovel Creek burn scars located in Interior Alaska to both assess burn severity across the burn scars and test the effectiveness of several remote sensing methods for generating accurate map products: Normalized Difference Vegetation Index (NDVI), Normalized Burn Ratio (NBR), and Random Forest (RF) and Support Vector Machine (SVM) supervised classification. We used 52 Composite Burn Index (CBI) plots from the Shovel Creek burn scar and 28 from the Nugget Creek burn scar for training classifiers and product validation. For the Shovel Creek burn scar, the RF and SVM machine learning (ML) classification methods outperformed the traditional spectral indices that use linear regression to separate burn severity classes (RF and SVM accuracy, 83.33%, versus NBR accuracy, 73.08%). However, for the Nugget Creek burn scar, the NDVI product (accuracy: 96%) outperformed the other indices and ML classifiers. In this study, we demonstrated that the ML classifiers can be very effective for reliable mapping of burn severity in the Alaskan boreal forest when sufficient ground truth data is available. Since the performance of ML classifiers is dependent on the quantity of ground truth data, the ML classification methods are better suited for assessing burn severity when sufficient ground truth data is available, whereas the traditional spectral indices are better suited when ground truth data is limited. We also looked at the relationship between burn severity, fuel type, and topography (aspect and slope) and found that the relationship is site-dependent.
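
A sketch of the index-based side of such a workflow: computing NDVI and NBR from Sentinel-2 bands and fitting a Random Forest on CBI plot labels; the band choices, single-date (rather than differenced) indices, and the synthetic sample are assumptions for illustration only.

```python
# Sketch: spectral indices from Sentinel-2 bands plus a Random Forest classifier
# trained on CBI plot labels. Band assignments, single-date indices, and the toy
# random sample are assumptions for illustration, not the study's workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ndvi(nir, red):             # Sentinel-2: NIR = B8, Red = B4
    return (nir - red) / (nir + red + 1e-9)

def nbr(nir, swir):             # Sentinel-2: NIR = B8, SWIR = B12
    return (nir - swir) / (nir + swir + 1e-9)

# Hypothetical per-plot post-fire reflectance values and CBI severity classes.
rng = np.random.default_rng(42)
nir_, red_, swir_ = rng.random(52), rng.random(52), rng.random(52)
features = np.column_stack([ndvi(nir_, red_), nbr(nir_, swir_)])
severity = rng.integers(0, 3, 52)               # 0 = low, 1 = moderate, 2 = high

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, severity)
print(clf.predict(features[:5]))
```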

