DRPnet: automated particle picking in cryo-electron micrographs using deep regression

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Nguyen Phuoc Nguyen ◽  
Ilker Ersoy ◽  
Jacob Gotberg ◽  
Filiz Bunyak ◽  
Tommi A. White

Abstract Background Identification and selection of protein particles in cryo-electron micrographs is an important step in single particle analysis. In this study, we developed a deep learning-based particle picking network to automatically detect particle centers from cryoEM micrographs. This is a challenging task due to the nature of cryoEM data, which has low signal-to-noise ratios together with variable particle sizes, shapes, distributions, and grayscale levels, as well as other undesirable artifacts. Results We propose a double convolutional neural network (CNN) cascade for automated detection of particles in cryo-electron micrographs. This approach, entitled Deep Regression Picker Network or “DRPnet”, is simple but very effective in recognizing different particle sizes, shapes, distributions and grayscale patterns corresponding to 2D views of 3D particles. Particles are detected by the first network, a fully convolutional regression network (FCRN), which maps the particle image to a continuous distance map that acts like a probability density function of particle centers. Particles identified by FCRN are further refined by the second, classification CNN to reduce false particle detections. DRPnet’s first CNN, pretrained with only a single cryoEM dataset, can be used to detect particles from different datasets without retraining. Compared to RELION template-based autopicking, DRPnet delivers better particle picking performance with drastically reduced user interaction and processing time. DRPnet also outperforms state-of-the-art particle picking networks in terms of the supervised detection evaluation metrics recall, precision, and F-measure. To further highlight the quality of the picked particle sets, we compute and present additional performance metrics assessing the resulting 3D reconstructions, such as the number of 2D class averages, efficiency/angular coverage, Rosenthal-Henderson plots, and local/global 3D reconstruction resolution.
Conclusion DRPnet greatly reduces the time needed to generate an initial particle dataset compared to manual picking followed by template-based autopicking. Compared to other networks, DRPnet has equivalent or better performance. DRPnet excels on cryoEM datasets that have low contrast or clumped particles. On the additional performance metrics evaluated, DRPnet is useful for higher-resolution 3D reconstructions with fewer particles or unknown symmetry, detecting particles with better angular orientation coverage.
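To make the regression-to-picking step concrete, here is a minimal sketch of how particle centers can be read off a continuous map in which peaks mark likely centers. It is not DRPnet's implementation: `pick_particles`, the threshold, and the suppression window size are illustrative assumptions, and a synthetic two-particle map stands in for a real FCRN output.

```python
import numpy as np
from scipy.ndimage import maximum_filter, label, center_of_mass

def pick_particles(prob_map, threshold=0.5, nms_size=15):
    """Extract candidate particle centers from a regression output map.

    prob_map: 2D array where peaks indicate likely particle centers
    (the continuous distance map an FCRN-style network would predict).
    threshold and nms_size are illustrative, not DRPnet's parameters.
    """
    # Non-maximum suppression: keep only pixels that are local maxima
    local_max = maximum_filter(prob_map, size=nms_size) == prob_map
    peaks = local_max & (prob_map > threshold)
    labeled, n = label(peaks)
    # Weighted centroid of each surviving peak gives sub-pixel centers
    return center_of_mass(prob_map, labeled, range(1, n + 1))

# Synthetic demo: two Gaussian "particles" on an empty background
yy, xx = np.mgrid[0:128, 0:128]
pm = (np.exp(-((yy - 40)**2 + (xx - 40)**2) / 50)
      + np.exp(-((yy - 90)**2 + (xx - 100)**2) / 50))
centers = pick_particles(pm)
print(sorted(round(c[0]) for c in centers))  # rows of the two detected peaks
```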

2019 ◽  
Author(s):  
Nguyen P. Nguyen ◽  
Jacob Gotberg ◽  
Ilker Ersoy ◽  
Filiz Bunyak ◽  
Tommi White

Abstract Selection of individual protein particles in cryo-electron micrographs is an important step in single particle analysis. In this study, we developed a deep learning-based method to automatically detect particle centers from cryoEM micrographs. This is a challenging task because of the low signal-to-noise ratio of cryoEM micrographs and the size, shape, and grayscale-level variations in particles. We propose a double convolutional neural network (CNN) cascade for automated detection of particles in cryo-electron micrographs. Particles are detected by the first network, a fully convolutional regression network (FCRN), which maps the particle image to a continuous distance map that acts like a probability density function of particle centers. Particles identified by FCRN are further refined (or classified) to reduce false particle detections by the second CNN. This approach, entitled Deep Regression Picker Network or “DRPnet”, is simple but very effective in recognizing different grayscale patterns corresponding to 2D views of 3D particles. Our experiments showed that DRPnet’s first CNN, pretrained with one dataset, can be used to detect particles from a different dataset without retraining. The performance of this network can be further improved by re-training it on specific particle datasets. The second network, a classification convolutional neural network, is used to refine detection results by identifying false detections. The proposed fully automated “deep regression” system, DRPnet, pretrained with TRPV1 (EMPIAR-10005) [1] and tested on β-galactosidase (EMPIAR-10017) [2] and β-galactosidase (EMPIAR-10061) [3], was then compared to RELION’s interactive particle picking. Preliminary experiments resulted in comparable or better particle picking performance with drastically reduced user interaction and improved processing time.


Medical image processing is a challenging research field, since most captured images suffer from noise and poor contrast. The accuracy of the details present in a medical image depends entirely on the quality of the captured image. Factors that affect image quality include poor illumination conditions, the capturing devices, and inexperienced technicians, all of which may result in low-contrast images. Hence, contrast enhancement techniques are necessary to improve the quality of OCT images for further processing. In this paper, the enhancement of OCT images is carried out using various enhancement techniques to identify the method that offers the greatest improvement in enhancement quality. It presents a comparative evaluation of enhancement techniques based on performance indices calculated from the experimental results. The results of this work suggest the enhancement technique best suited to OCT images according to various performance metrics used prominently in medical imaging.
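As an illustration of such a comparative evaluation, the sketch below applies one classical enhancement technique (global histogram equalization, implemented from scratch) and scores the input and output with two example indices, RMS contrast and entropy. The metric choice and the synthetic low-contrast image are assumptions for illustration, not the specific indices or data used in the paper.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size          # cumulative distribution
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                          # remap every pixel

def rms_contrast(img):
    """RMS contrast: standard deviation of pixel intensities."""
    return img.astype(float).std()

def entropy(img):
    """Shannon entropy of the intensity histogram, in bits."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Synthetic low-contrast "scan": intensities squeezed into a narrow band
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
enh = hist_equalize(img)
print(rms_contrast(img), rms_contrast(enh))  # contrast rises after equalization
```

Each candidate technique would be run through the same indices, and the one with the best scores on the metrics of interest selected.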


Author(s):  
Russell L. Steere ◽  
Eric F. Erbe ◽  
J. Michael Moseley

We have designed and built an electronic device which compares the resistance of a defined area of vacuum-evaporated material with a variable resistor. When the two resistances are matched, the device automatically disconnects the primary side of the substrate transformer and stops further evaporation. This approach to controlled evaporation, in conjunction with the modified guns and evaporation source, permits reliably reproducible multiple Pt shadow films from a single Pt-wrapped carbon point source. The reproducibility from consecutive C point sources is also reliable. Furthermore, the device we have developed permits us to select a predetermined resistance so that low-contrast high-resolution shadows, heavy high-contrast shadows, or any grade in between can be selected at will. The reproducibility and quality of the results are demonstrated in Figures 1-4, which represent evaporations at various settings of the variable resistor.
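The cutoff logic the device implements can be sketched as a simple simulation: the film's resistance falls as material is deposited, and power is cut the moment it reaches the preset value. This is a toy linear model; `run_evaporation`, the starting resistance, and the constant deposition rate are all hypothetical, and a real film's resistance varies nonlinearly with thickness.

```python
def run_evaporation(target_resistance, step_ohms_per_tick):
    """Simulate the comparator cutoff: deposition lowers film resistance
    each tick; the transformer primary is disconnected once the film
    resistance matches the preset (variable-resistor) value.
    Toy linear model for illustration only."""
    resistance = 1e6   # fresh substrate: effectively open circuit (ohms)
    ticks = 0
    power_on = True
    while power_on:
        resistance -= step_ohms_per_tick   # deposition thickens the film
        ticks += 1
        if resistance <= target_resistance:
            power_on = False               # disconnect transformer primary
    return resistance, ticks

final_r, ticks = run_evaporation(target_resistance=1000,
                                 step_ohms_per_tick=10000)
print(final_r, ticks)
```

Choosing a higher `target_resistance` stops the evaporation sooner, giving the thinner, lower-contrast shadow films described above.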


2017 ◽  
Vol 1 (3) ◽  
pp. 54
Author(s):  
BOUKELLOUZ Wafa ◽  
MOUSSAOUI Abdelouahab

Background: Since the last decades, research has been oriented towards MRI-alone radiation treatment planning (RTP), where MRI is used as the primary modality for imaging, delineation and dose calculation by assigning to it the needed electron density (ED) information. The idea is to create a computed tomography (CT) image, or so-called pseudo-CT, from MRI data. In this paper, we review and classify methods for creating pseudo-CT images from MRI data. Each class of methods is explained, and a group of works from the literature is presented in detail with statistical performance. We discuss the advantages, drawbacks and limitations of each class of methods. Methods: We classified the most recent works on deriving a pseudo-CT from MR images into four classes: segmentation-based, intensity-based, atlas-based and hybrid methods. We based the classification on the general technique applied in the approach. Results: Most research has focused on the brain and pelvis regions. The mean absolute error (MAE) ranged from 80 HU to 137 HU and from 36.4 HU to 74 HU for the brain and pelvis, respectively. In addition, interest in the Dixon MR sequence is increasing, since it has the advantage of producing multiple contrast images with a single acquisition. Conclusion: The radiation therapy field is moving towards the generalization of MRI-only RT thanks to advances in techniques for the generation of pseudo-CT images. However, a benchmark is needed to establish common performance metrics to assess the quality of the generated pseudo-CT and judge the efficiency of a given method.
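The MAE figures quoted above can be computed directly from a pseudo-CT and its reference CT. A minimal sketch follows; the `mask` argument reflects the common practice of reporting per-region MAE, and the toy volumes (a pseudo-CT off by a constant 50 HU) are illustrative assumptions.

```python
import numpy as np

def mae_hu(pseudo_ct, reference_ct, mask=None):
    """Mean absolute error in Hounsfield units between a pseudo-CT
    generated from MRI and the reference CT, optionally restricted
    to a body or organ mask."""
    diff = np.abs(pseudo_ct.astype(float) - reference_ct.astype(float))
    if mask is not None:
        diff = diff[mask]       # evaluate only inside the region of interest
    return diff.mean()

# Toy volumes: a reference CT and a pseudo-CT biased by a constant 50 HU
ref = np.full((8, 8, 8), 40.0)   # soft tissue is roughly 40 HU
pct = ref + 50.0
print(mae_hu(pct, ref))  # 50.0
```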


Author(s):  
Snehashis Pal ◽  
Nenad Gubeljak ◽  
Tonica Bončina ◽  
Radovan Hudák ◽  
Teodor Toth ◽  
...  

Abstract In this study, the effect of powder spreading direction was investigated on selectively laser-melted specimens. The results showed that the metallurgical properties of the specimens varied during fabrication with respect to their position on the build tray. The density, porosity, and tensile properties of the Co–Cr–W–Mo alloy were investigated on cuboid and tensile specimens fabricated at different locations. Two significant positions on the tray were selected along the powder spreading direction. One set of specimens was located near the start line of powder spreading, and the other set was located near the end of the building tray. The consequences of powder layering were driven mainly by the distribution of powder particle sizes and the packing density of the layers. As a result, laser penetration, melt pool formation, and fusion characteristics varied. To confirm the occurrence of variations in sample density, an additional experiment was performed with a Ti–6Al–4V alloy. Furthermore, the powders were collected at two different fabrication locations and their size distributions for both materials were investigated.


2018 ◽  
Vol 7 (2.26) ◽  
pp. 25
Author(s):  
E Ramya ◽  
R Gobinath

Data mining plays an important role in the analysis of data in modern sensor networks. A sensor network is greatly constrained by the various challenges facing modern Wireless Sensor Networks. This survey paper focuses on the basic ideas behind the algorithms and measurements used by researchers in the area of Wireless Sensor Networks for health care. The survey also categorizes the various constraints on Wireless Body Area Sensor Network data and identifies the most suitable techniques for analysing the sensor data. Due to resource constraints and dynamic topology, quality of service is a challenging issue in Wireless Sensor Networks. In this paper, we review the quality of service parameters with respect to protocols, algorithms and simulations.


Author(s):  
Anna Ferrante ◽  
James Boyd ◽  
Sean Randall ◽  
Adrian Brown ◽  
James Semmens

ABSTRACT Objectives Record linkage is a powerful technique which transforms discrete episode data into longitudinal person-based records. These records enable the construction and analysis of complex pathways of health and disease progression, and service use. Achieving high linkage quality is essential for ensuring the quality and integrity of research based on linked data. The methods used to assess linkage quality will depend on the volume and characteristics of the datasets involved, the processes used for linkage and the additional information available for quality assessment. This paper proposes and evaluates two methods to routinely assess linkage quality. Approach Linkage units currently use a range of methods to measure, monitor and improve linkage quality; however, no common approach or standards exist. There is an urgent need to develop “best practices” in evaluating, reporting and benchmarking linkage quality. In assessing linkage quality, of primary interest is knowing the number of true matches and non-matches identified as links and non-links. Any misclassification of matches within these groups introduces linkage errors. We present efforts to develop sharable methods to measure linkage quality in Australia. This includes a sampling-based method to estimate both precision (accuracy) and recall (sensitivity) following record linkage, and a benchmarking method: a transparent and transportable methodology to benchmark the quality of linkages across different operational environments. Results The sampling-based method achieved estimates of linkage quality that were very close to actual linkage quality metrics. This method presents as a feasible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The benchmarking method provides a systematic approach to estimating linkage quality with a set of open and shareable datasets and a set of well-defined, established performance metrics.
The method provides an opportunity to benchmark the linkage quality of different record linkage operations. Both methods have the potential to assess the inter-rater reliability of clerical reviews. Conclusions Both methods produce reliable estimates of linkage quality, enabling the exchange of information within and between linkage communities. It is important that researchers can assess risk in studies using record linkage techniques. Understanding the impact of linkage quality on research outputs highlights a need for standard methods to routinely measure linkage quality. These two methods provide a good start to the quality process, but it is important to identify standards and good practices in all parts of the linkage process (pre-processing, standardising activities, linkage, grouping and extracting).
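The sampling-based estimate of precision described above can be sketched as follows, with a truth function standing in for clerical review of the sampled links. The function name, sample size, and toy linkage are assumptions for illustration, not the authors' actual procedure.

```python
import random

def estimate_precision(links, is_true_match, sample_size, seed=0):
    """Estimate linkage precision by reviewing a random sample of the
    created links; is_true_match stands in for the clerical review that
    would classify each sampled pair as a true match or a false link."""
    rng = random.Random(seed)
    sample = rng.sample(list(links), min(sample_size, len(links)))
    return sum(is_true_match(pair) for pair in sample) / len(sample)

# Toy linkage: 1000 links, of which the last 100 are false positives
links = [(i, i) for i in range(900)] + [(i, i + 1) for i in range(100)]
truth = lambda pair: pair[0] == pair[1]   # stand-in for clerical review
est = estimate_precision(links, truth, sample_size=200)
print(round(est, 2))  # close to the true precision of 0.90
```

Estimating recall works analogously but requires sampling from known true matches (e.g. a gold-standard subset) and counting how many were found as links.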


Author(s):  
M. A. Taymarov ◽  
R. V. Akhmetova ◽  
S. M. Margulis ◽  
L. I. Kasimova

The difficulties of burning the watered fuel oil used at TPPs as a reserve fuel for boilers are associated with its preparation, namely heating to reduce viscosity and the choice of a method of spraying it with nozzles into the combustion zone. The quality of the preparation of fuel oil for combustion, which affects boiler efficiency, is estimated by the length of the flame, the presence of burning large particles of fuel oil, and the deposition of coke and unburned particles onto screens and other heat-receiving surfaces. One way to prepare fuel oil for combustion is cavitation treatment, which results in an emulsion consisting of fine micron-sized particles. Heating of fuel oil particles after the nozzle, on contact with the combustion zone, is due to the flux of radiation from the burning torch. Therefore, in this article, the values of the flux density from the torch during the combustion of fuel oil are experimentally determined. The influence of particle size on the burning rate of M100 fuel oil was studied at different flame thermal radiation densities. It is found that the effect of cavitation treatment of fuel oil on the combustion rate is most significant for particle sizes below 10 microns. For this purpose, the use of hydrodynamic cavitators is preferred at high fuel oil consumption rates.

