Virtual experimentations by deep learning on tangible materials

2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Takashi Honda ◽  
Shun Muroga ◽  
Hideaki Nakajima ◽  
Taiyo Shimizu ◽  
Kazufumi Kobashi ◽  
...  

Abstract: Artificial intelligence relying on structure-property databases is an emerging, powerful tool for discovering new materials with targeted properties. However, this approach cannot be easily applied to tangible structures, such as plastic composites and fabrics, because of their high structural complexity. Here, we propose a deep learning computational framework that can implement virtual experiments on tangible structures. Structural representations of complex carbon nanotube films were constructed by multiple generative adversarial networks from scanning electron microscope images at four levels of magnification, enabling a deep learning prediction of multiple properties such as electrical conductivity and surface area. 1716 virtual experiments were completed within an hour, a task that would take years as real experiments. The data can serve as a versatile database for materials science, in analogy to the databases of molecules and solids used in cheminformatics. Useful examples include investigating the correlation between electrical conductivity and specific surface area, constructing wall-number phase diagrams, assessing economic performance, and inversely designing supercapacitors.

2021 ◽  
Author(s):  
Kenji Hata ◽  
Takashi Honda ◽  
Shun Muroga ◽  
Hideaki Nakajima ◽  
Taiyo Shimizu ◽  
...  

Abstract: Artificial intelligence is an emerging frontier in materials science for discovering new materials with targeted properties via an artificial neural network (ANN) constructed from existing structure-property databases. This approach has not been applicable to tangible materials, such as plastic composites, fabrics, and rubbers, because the complexities of their structures cannot be defined. Here we propose a deep learning computational framework that can implement “virtual” experiments on tangible materials (carbon nanotube (CNT) films), where structural representations (scanning electron microscope images at four levels of magnification: x2k, x20k, x50k, and x100k) of the processed material (dispersed and filtered) were created by multiple generative adversarial networks, from which an ANN predicted multiple properties (electrical conductivity and specific surface area). 1865 virtual experiments were completed within an hour, a task that would take years as real experiments. The accumulated data can be used as a versatile database for materials science, analogous to the databases of molecules and solids used in cheminformatics, as exemplified by investigations of the correlation between electrical conductivity and specific surface area, wall-number phase diagrams, the most economical mixture of CNTs for a specified property, and inversely designed CNT supercapacitors.
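The pipeline this abstract describes (per-magnification GANs generating SEM-like images, whose features feed an ANN that predicts properties) can be outlined in code. The sketch below is a minimal, hypothetical illustration in numpy: stub generators stand in for the trained GANs, a random-weight linear readout stands in for the trained ANN, and all names, shapes, and feature choices are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
MAGS = ["x2k", "x20k", "x50k", "x100k"]  # the four SEM magnification levels

def generate_image(process_params, mag, size=64):
    """Stub for a per-magnification GAN: maps process parameters
    (e.g. dispersion time, filtration speed) to a synthetic SEM image."""
    seed = int(1000 * process_params.sum()) + len(mag)
    local = np.random.default_rng(seed)
    return local.random((size, size))

def image_features(img, bins=8):
    """Toy texture descriptor: normalized grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def virtual_experiment(process_params, W, b):
    """One 'virtual experiment': generate images at all magnifications,
    concatenate their features, predict (conductivity, surface area)."""
    feats = np.concatenate(
        [image_features(generate_image(process_params, m)) for m in MAGS])
    return W @ feats + b  # linear stand-in for the trained ANN

# Random readout weights: 2 properties from 4 magnifications x 8 features.
W = rng.normal(size=(2, 32))
b = rng.normal(size=2)

# Sweep many process conditions cheaply -- the point of virtual experiments.
results = np.array([virtual_experiment(np.array([t, s]), W, b)
                    for t in np.linspace(0, 1, 8)
                    for s in np.linspace(0, 1, 8)])
print(results.shape)  # 64 virtual experiments, 2 predicted properties each
```

Because each "experiment" is only a generator pass plus a forward prediction, sweeping an entire process-parameter grid takes seconds, which is why thousands of virtual experiments fit in an hour.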


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security, and other fields. Alongside their useful applications, an alarming concern regarding physical infrastructure security, safety, and privacy has arisen from their potential use in malicious activities. To address this problem, we propose a novel solution that automates the drone detection and identification processes using a drone’s acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and drone audio samples artificially generated using a state-of-the-art deep learning technique known as the Generative Adversarial Network (GAN). Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network, and the Convolutional Recurrent Neural Network, for drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using GANs to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
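Acoustic classifiers like the CNN/RNN models mentioned above are typically fed a time-frequency representation of each audio clip rather than the raw waveform. The sketch below shows that common preprocessing step with a hand-rolled framed FFT in numpy; the window length, hop size, and toy tone standing in for rotor noise are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a windowed, framed FFT -- the usual
    input representation for CNN/RNN audio classifiers."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)

# Toy 1-second "clip" at 8 kHz: a 600 Hz tone standing in for rotor noise.
sr = 8000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 600 * t)

spec = stft_magnitude(clip)
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / 256  # FFT bin spacing = sr / frame_len
print(spec.shape, peak_hz)
```

The resulting 2-D array is what a convolutional network would treat as an image; a recurrent network would instead consume it frame by frame along the time axis.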


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Karim Armanious ◽  
Tobias Hepp ◽  
Thomas Küstner ◽  
Helmut Dittmann ◽  
Konstantin Nikolaou ◽  
...  

2021 ◽  
Author(s):  
Van Bettauer ◽  
Anna CBP Costa ◽  
Raha Parvizi Omran ◽  
Samira Massahi ◽  
Eftyhios Kirbizakis ◽  
...  

We present deep learning-based approaches for exploring the complex array of morphologies exhibited by the opportunistic human pathogen C. albicans. Our system, entitled Candescence, automatically detects C. albicans cells from Differential Interference Contrast (DIC) microscopy and labels each detected cell with one of nine vegetative, mating-competent, or filamentous morphologies. The software is based upon a fully convolutional one-stage object detector and exploits a novel cumulative curriculum-based learning strategy that stratifies our images by difficulty, from simple vegetative forms to more complex filamentous architectures. Candescence achieves very good performance on this difficult learning set, which has substantial intermixing between the predicted classes. To capture the essence of each C. albicans morphology, we develop models using generative adversarial networks and identify subcomponents of the latent space which control technical variables, developmental trajectories, or morphological switches. We envision Candescence as a community meeting point for quantitative explorations of C. albicans morphology.
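The cumulative curriculum strategy described above, training first on easy classes and then progressively adding harder ones while retaining the earlier ones, can be sketched generically. The class names and three-stage grouping below are illustrative assumptions loosely echoing the abstract, not the nine actual Candescence labels.

```python
# Cumulative curriculum: each stage trains on all classes seen so far,
# ordered from simple vegetative forms to complex filamentous ones.
CURRICULUM = [
    ["yeast", "budding"],        # simple vegetative forms first
    ["pseudohyphae"],            # intermediate morphology
    ["hyphae", "filamentous"],   # hardest architectures last
]

def cumulative_stages(curriculum):
    """Yield the training label set for each curriculum stage,
    accumulating (rather than replacing) earlier stages' classes."""
    seen = []
    for group in curriculum:
        seen = seen + group
        yield list(seen)

stages = list(cumulative_stages(CURRICULUM))
for i, labels in enumerate(stages):
    print(f"stage {i}: train on {labels}")
```

The accumulation is the key design choice: because every stage re-includes the easy classes, the detector is not free to forget them while it learns the harder filamentous architectures.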


Author(s):  
Priyanka Nandal

This work presents a simple method for motion transfer: given a source video of a subject (person) performing some movement, that motion is transferred to an amateur target performing a different motion. Pose is used as an intermediate representation to perform this translation. To transfer the motion of the source subject to the target subject, the pose is extracted from the source subject, and the target subject is then generated by applying a learned pose-to-appearance mapping. For this translation, the video is treated as a set of images consisting of all of its frames. Generative adversarial networks (GANs), an evolving field of deep learning, are used to transfer the motion from the source subject to the target subject.
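The per-frame pipeline described above (frame → extracted pose → generated target-subject frame) can be sketched with stubs. In the sketch below, `extract_pose` stands in for a real keypoint detector and `pose_to_appearance` for the learned GAN generator; both bodies, the 18-keypoint format, and the image size are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_pose(frame):
    """Stub pose estimator: a real system would run a keypoint
    detector; here we fake 18 (x, y) joint coordinates."""
    local = np.random.default_rng(int(frame.sum()) % (2**32))
    return local.random((18, 2))

def pose_to_appearance(pose, target_id):
    """Stub for the learned pose-to-appearance GAN generator:
    renders the target subject in the given pose. Here: a flat
    image whose intensity merely encodes the pose."""
    return np.full((64, 64, 3), pose.mean())

def transfer_motion(source_video, target_id=0):
    """Frame-wise motion transfer: source frame -> pose -> target frame."""
    return [pose_to_appearance(extract_pose(f), target_id)
            for f in source_video]

source_video = [rng.random((64, 64, 3)) for _ in range(5)]  # 5 toy frames
out = transfer_motion(source_video)
print(len(out), out[0].shape)
```

Treating the video as an ordered list of independent frames keeps the mapping simple, at the cost of temporal coherence, which is why published systems add temporal smoothing on top of this per-frame loop.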


2020 ◽  
Vol 10 (2) ◽  
pp. 82-105
Author(s):  
Yadigar N. Imamverdiyev ◽  
Fargana J. Abdullayeva

In this article, a review and summarization of emerging scientific approaches to deep learning (DL) in cybersecurity are provided, a structured and comprehensive overview of the various cyberattack detection methods is conducted, and existing DL-based cyberattack detection methods are categorized. Methods addressing attacks on deep learning models based on generative adversarial networks (GANs) are investigated. The datasets researchers use to evaluate the efficiency of proposed cyberattack detection methods are discussed. A statistical analysis of papers published on cybersecurity with the application of DL over the years is conducted. Existing commercial cybersecurity solutions built on deep learning are described.


PLoS ONE ◽  
2020 ◽  
Vol 15 (3) ◽  
pp. e0229951 ◽  
Author(s):  
Atsushi Teramoto ◽  
Tetsuya Tsukamoto ◽  
Ayumi Yamada ◽  
Yuka Kiriyama ◽  
Kazuyoshi Imaizumi ◽  
...  
