Deep Learning-Based High Throughput Inspection in 3D Nanofabrication and Defect Reversal in Nanopillar Arrays: Implications for Next Generation Transistors

Author(s): Utkarsh Anand, Tanmay Ghosh, Zainul Aabdin, Nandi Vrancken, Hongwei Yan, et al.

2019
Author(s): Seoin Back, Junwoong Yoon, Nianhan Tian, Wen Zhong, Kevin Tran, et al.

We present an application of a deep-learning convolutional neural network to atomic surface structures, using atomic and Voronoi polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.
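To make the featurization concrete, the sketch below derives crude Voronoi-neighbor counts for a set of atomic coordinates and fits a small neural-network regressor to a binding-energy target. The feature vector, network size, and synthetic training data are illustrative assumptions, not the paper's architecture; only the use of a Voronoi tessellation to define atomic neighbors mirrors the described approach.

```python
# Minimal sketch: Voronoi-neighbor features for adsorption-energy regression.
# Uses a toy dataset of random coordinates and fabricated target energies;
# the paper's actual CNN and fingerprints are not reproduced here.
import numpy as np
from scipy.spatial import Voronoi
import torch
import torch.nn as nn

def voronoi_neighbor_counts(coords):
    """Count Voronoi neighbors per atom (atoms whose cells share a facet)."""
    vor = Voronoi(coords)
    counts = np.zeros(len(coords))
    for i, j in vor.ridge_points:      # each ridge links two neighboring atoms
        counts[i] += 1
        counts[j] += 1
    return counts

def featurize(coords):
    """Hypothetical per-structure features: mean/std of neighbor counts, atom count."""
    c = voronoi_neighbor_counts(coords)
    return np.array([c.mean(), c.std(), len(coords)], dtype=np.float32)

# Small regressor mapping features -> adsorbate binding energy (eV).
model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training loop on random mock slabs with mock binding energies.
rng = np.random.default_rng(0)
for step in range(100):
    coords = rng.uniform(0.0, 10.0, size=(30, 3))                    # mock slab coordinates
    x = torch.from_numpy(featurize(coords)).unsqueeze(0)
    y = torch.tensor([[rng.normal(-1.0, 0.5)]], dtype=torch.float32)  # mock E_bind
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```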


2019, Vol 25 (31), pp. 3350-3357
Author(s): Pooja Tripathi, Jyotsna Singh, Jonathan A. Lal, Vijay Tripathi

Background: With the advent of high-throughput next-generation sequencing (NGS), biological research in drug discovery has been directed towards the oncology and infectious disease therapeutic areas, with extensive use in biopharmaceutical development and vaccine production. Method: In this review, an effort was made to address the basic background of NGS technologies and their potential applications in drug design; our purpose is also to provide a brief introduction to the various next-generation sequencing techniques. Discussion: The high-throughput methods execute Large-scale Unbiased Sequencing (LUS), which comprises Massively Parallel Sequencing (MPS) or NGS technologies. These are related terms describing a DNA sequencing technology that has revolutionized genomic research; using NGS, an entire human genome can be sequenced within a single day. Conclusion: Analysis of NGS data unravels important clues in the quest for the treatment of various life-threatening diseases and other scientific problems related to human welfare.


2021, Vol 118 (12), pp. 123701
Author(s): Julie Martin-Wortham, Steffen M. Recktenwald, Marcelle G. M. Lopes, Lars Kaestner, Christian Wagner, et al.

Plant Methods, 2021, Vol 17 (1)
Author(s): Shuo Zhou, Xiujuan Chai, Zixuan Yang, Hongwu Wang, Chenxue Yang, et al.

Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing of image datasets. Results: On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking RGB images of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. In the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion: Maize-IAS is easy to use and demands no professional knowledge of computer vision or deep learning. All functions support batch processing, enabling automated, labor-reduced recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of AI technology applied in agriculture and plant science.
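As an illustration of the kind of batch image analysis described above, the sketch below extracts two simple traits (projected plant area and plant height) from RGB images using a plain HSV color mask. The thresholds, folder name, and pixel-to-centimeter scale are assumed placeholder values; Maize-IAS itself relies on convolutional neural networks rather than this hand-tuned segmentation.

```python
# Minimal sketch of batch phenotype extraction from RGB maize images, in the
# spirit of the Projection and Height functions described above. HSV bounds,
# image folder, and cm-per-pixel scale are illustrative assumptions.
import glob
import cv2
import numpy as np

GREEN_LO = np.array([35, 40, 40])     # assumed HSV lower bound for plant tissue
GREEN_HI = np.array([85, 255, 255])   # assumed HSV upper bound

def plant_metrics(path, cm_per_px=0.1):
    """Return projected plant area (pixels) and plant height (cm) for one image."""
    img = cv2.imread(path)
    if img is None:
        return None
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LO, GREEN_HI)        # binary plant mask
    ys, _ = np.nonzero(mask)
    if ys.size == 0:
        return {"file": path, "projection_px": 0, "height_cm": 0.0}
    return {
        "file": path,
        "projection_px": int(np.count_nonzero(mask)),  # projected leaf/stem area
        "height_cm": float(ys.max() - ys.min()) * cm_per_px,
    }

if __name__ == "__main__":
    # Batch processing over a hypothetical image folder.
    for p in sorted(glob.glob("maize_images/*.png")):
        result = plant_metrics(p)
        if result is not None:
            print(result)
```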


Author(s): Xuesheng Bian, Gang Li, Cheng Wang, Weiquan Liu, Xiuhong Lin, et al.

Processes, 2021, Vol 9 (4), pp. 575
Author(s): Jelena Ochs, Ferdinand Biermann, Tobias Piotrowski, Frederik Erkens, Bastian Nießing, et al.

Laboratory automation is a key driver in biotechnology and an enabler for powerful new technologies and applications. In particular, in the field of personalized therapies, automation in research and production is a prerequisite for achieving cost efficiency and broad availability of tailored treatments. For this reason, we present the StemCellDiscovery, a fully automated robotic laboratory for the cultivation of human mesenchymal stem cells (hMSCs) at small scale and in parallel. While the system can handle different kinds of adherent cells, here we focus on the cultivation of adipose-derived hMSCs. The StemCellDiscovery provides in-line visual quality control for automated confluence estimation, realized by combining high-speed microscopy with deep learning-based image processing. We demonstrate the ability of the algorithm to detect hMSCs in culture at different densities and to calculate confluence from the resulting images. Furthermore, we show that the StemCellDiscovery is capable of expanding adipose-derived hMSCs in a fully automated manner using the confluence estimation algorithm. To estimate the system capacity under high-throughput conditions, we modeled the production environment in simulation software. The simulations of the production process indicate that the robotic laboratory is capable of handling more than 95 cell culture plates per day.
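The confluence-estimation step can be pictured with the minimal sketch below: segment the cell-covered area of a microscopy image, report it as a percentage, and use a threshold to flag plates for passaging. A simple adaptive threshold stands in for the deep learning detector described in the paper, and the 80% passaging threshold is an assumed example value.

```python
# Minimal sketch of confluence estimation from a grayscale microscopy image.
# A classical adaptive threshold substitutes for the paper's deep learning
# model; the passaging threshold is an assumed example, not a system setting.
import cv2
import numpy as np

def estimate_confluence(image_path):
    """Return the percentage of the image area classified as cell-covered."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # Adaptive threshold as a crude cell/background separator.
    mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 51, 2)
    # Morphological opening removes small speckles so debris does not
    # inflate the estimate.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return 100.0 * np.count_nonzero(mask) / mask.size

def needs_passaging(confluence_pct, threshold=80.0):
    """Flag a plate for passaging once confluence exceeds the threshold."""
    return confluence_pct >= threshold
```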

