LLAMA: a robust and scalable machine learning pipeline for analysis of large scale 4D microscopy data: analysis of cell ruffles and filopodia

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
James G. Lefevre ◽  
Yvette W. H. Koh ◽  
Adam A. Wall ◽  
Nicholas D. Condon ◽  
Jennifer L. Stow ◽  
...  

Abstract
Background: With recent advances in microscopy, recordings of cell behaviour can result in terabyte-size datasets. The lattice light sheet microscope (LLSM) images cells at high speed and high 3D resolution, accumulating data at 100 frames/second over hours, presenting a major challenge for interrogating these datasets. The surfaces of vertebrate cells can rapidly deform to create projections that interact with the microenvironment. Such surface projections include spike-like filopodia and wave-like ruffles on the surface of macrophages as they engage in immune surveillance. LLSM imaging has provided new insights into the complex surface behaviours of immune cells, including revealing new types of ruffles. However, full use of these data requires systematic and quantitative analysis of thousands of projections over hundreds of time steps, and an effective system for analysis of individual structures at this scale requires efficient and robust methods with minimal user intervention.
Results: We present LLAMA, a platform to enable systematic analysis of terabyte-scale 4D microscopy datasets. We use a machine learning method for semantic segmentation, followed by a robust and configurable object separation and tracking algorithm, generating detailed object-level statistics. Our system is designed to run on high-performance computing to achieve high throughput, with outputs suitable for visualisation and statistical analysis. Advanced visualisation is a key element of LLAMA: we provide a specialised tool which supports interactive quality control, optimisation, and output visualisation processes to complement the processing pipeline. LLAMA is demonstrated in an analysis of macrophage surface projections, in which it is used to (i) discriminate ruffles induced by lipopolysaccharide (LPS) and macrophage colony stimulating factor (CSF-1) and (ii) determine the autonomy of ruffle morphologies.
Conclusions: LLAMA provides an effective open source tool for running a cell microscopy analysis pipeline based on semantic segmentation, object analysis and tracking. Detailed numerical and visual outputs enable effective statistical analysis, identifying distinct patterns of increased activity under the two interventions considered in our example analysis. Our system provides the capacity to screen large datasets for specific structural configurations. LLAMA identified distinct features of LPS- and CSF-1-induced ruffles, and it identified a continuity of behaviour between tent-pole ruffling, wave-like ruffling and filopodia deployment.
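The pipeline stages named in the abstract (semantic segmentation, then object separation, tracking, and object-level statistics) can be sketched minimally with `scipy.ndimage`. This is an illustrative toy on a 2D mask, not the authors' LLAMA implementation; the function names and the overlap-based tracker are assumptions for illustration only:

```python
import numpy as np
from scipy import ndimage

def separate_objects(mask):
    """Label connected components in a binary semantic-segmentation mask."""
    labels, n = ndimage.label(mask)
    return labels, n

def object_stats(labels, n):
    """Per-object pixel counts and centroids (object-level statistics)."""
    idx = range(1, n + 1)
    sizes = ndimage.sum(labels > 0, labels, index=idx)
    centroids = ndimage.center_of_mass(labels > 0, labels, index=idx)
    return list(zip(sizes, centroids))

def track_by_overlap(labels_t0, labels_t1):
    """Match objects across consecutive frames by maximal pixel overlap."""
    matches = {}
    for obj in np.unique(labels_t0):
        if obj == 0:
            continue
        overlap = labels_t1[labels_t0 == obj]
        overlap = overlap[overlap > 0]
        if overlap.size:
            matches[int(obj)] = int(np.bincount(overlap).argmax())
    return matches

# toy example: two blobs, both persist (shifted) into the next frame
frame0 = np.zeros((8, 8), int)
frame0[1:3, 1:3] = 1
frame0[5:7, 5:7] = 1
labels0, n0 = separate_objects(frame0)
labels1, _ = separate_objects(np.roll(frame0, 1, axis=1))
print(n0, track_by_overlap(labels0, labels1))  # → 2 {1: 1, 2: 2}
```

In a real 4D setting the same idea runs per 3D volume per time step, which is what makes minimal user intervention and HPC throughput essential.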

2020 ◽  
Author(s):  
James G. Lefevre ◽  
Yvette W. H. Koh ◽  
Adam A. Wall ◽  
Nicholas D. Condon ◽  
Jennifer L. Stow ◽  
...  

Abstract We present LLAMA, a pipeline for systematic analysis of terabyte-scale 4D microscopy datasets. Analysis of individual biological structures in imaging at this scale requires efficient and robust methods that do not require human micromanagement or editing of outputs. To meet this challenge, we use a machine learning method for semantic segmentation, followed by a robust and configurable object separation and tracking algorithm, and the generation of detailed object-level statistics. Advanced visualisation is a key element of LLAMA: we provide a specialised software tool which supports quality control and optimisation as well as visualisation of outputs. LLAMA was used in a quantitative analysis of macrophage surface membrane projections (filopodia, ruffles, tent-pole ruffles) examining the differential effects of two interventions: lipopolysaccharide (LPS) and macrophage colony stimulating factor (CSF-1). Distinct patterns of increased activity were identified. In addition, a continuity of behaviour was found between tent-pole ruffling and wave-like ruffling, further defining the role of filopodia in ruffling.


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator, and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
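Once per-configuration runtime and energy have been predicted, identifying the Pareto-optimal trade-off options is a simple dominance filter. A minimal sketch, assuming hypothetical predicted values (not the paper's models or data):

```python
import numpy as np

def pareto_front(points):
    """Indices of Pareto-optimal (runtime, energy) points, minimising both.

    Sort by runtime; a point joins the front only if it strictly improves
    on the best energy seen so far, i.e. no other point dominates it.
    """
    order = np.argsort(points[:, 0])
    front, best_energy = [], np.inf
    for i in order:
        if points[i, 1] < best_energy:
            front.append(int(i))
            best_energy = points[i, 1]
    return front

# hypothetical model predictions for four candidate configurations:
# columns are (runtime in s, energy in J)
preds = np.array([[10.0, 50.0],
                  [12.0, 40.0],
                  [15.0, 45.0],   # dominated by configuration 1
                  [20.0, 30.0]])
print(pareto_front(preds))  # → [0, 1, 3]
```

Users can then pick a point on the front that matches their tolerance, e.g. accepting a modest runtime increase for a large energy saving, mirroring the 45%-for-10% trade-off reported for AMG.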


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1365
Author(s):  
Tao Zheng ◽  
Zhizhao Duan ◽  
Jin Wang ◽  
Guodong Lu ◽  
Shengjie Li ◽  
...  

Semantic segmentation of room maps is an essential issue in mobile robots’ execution of tasks. In this work, we propose a new approach that obtains semantic labels for 2D lidar room maps by combining distance-transform watershed pre-segmentation with a carefully designed neural network that classifies sampled lidar information. To label room maps with high efficiency, high precision and high speed, we designed a low-power, high-performance method that can be deployed on low-compute Raspberry Pi devices. In the training stage, a lidar is simulated to collect the lidar detection line maps of each point in the manually labelled map, and these line maps and their corresponding labels are used to train the designed neural network. In the testing stage, the new map is first pre-segmented into simple cells with the distance-transform watershed method, and the lidar detection line maps are then classified with the trained neural network. Optimized areas of sparse sampling points are proposed, using the distance transform generated during pre-segmentation, to prevent sampling points selected in boundary regions from influencing the semantic labeling results. A prototype mobile robot was developed to verify the proposed method; its feasibility, validity, robustness and high efficiency were confirmed by a series of tests. The proposed method achieved high recall and precision: a mean recall of 0.965 and a mean precision of 0.943.
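The distance-transform pre-segmentation idea can be illustrated with `scipy.ndimage` on a toy occupancy grid: compute each free cell's distance to the nearest wall, then suppress cells near walls so narrow doorways disappear and the room cores separate. This is a simplified stand-in for the full watershed step, with a made-up map and threshold, not the paper's implementation:

```python
import numpy as np
from scipy import ndimage

# toy occupancy map: 1 = free space, 0 = wall; a wall with one doorway
# splits the map into two rooms
free = np.ones((7, 11), int)
free[:, 5] = 0          # wall column
free[3, 5] = 1          # doorway connecting the rooms

# distance of every free cell to the nearest wall cell
dist = ndimage.distance_transform_edt(free)

# keep only cells well away from walls: the doorway (distance 1) is
# removed, so labelling the remainder separates the two room cores
cores, n_rooms = ndimage.label(dist > 1.0)
print(n_rooms)  # → 2
```

In the full method these cores would seed a watershed that assigns every free cell to a cell region, and the distance values also flag boundary regions where sampling points should be avoided.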


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Jiyun Heo ◽  
Jae-Yun Han ◽  
Soohyun Kim ◽  
Seongmin Yuk ◽  
Chanyong Choi ◽  
...  

Abstract The vanadium redox flow battery is considered one of the most promising candidates for use in large-scale energy storage systems. However, its commercialization has been hindered due to the high manufacturing cost of the vanadium electrolyte, which is currently prepared using a costly electrolysis method with limited productivity. In this work, we present a simpler method for chemical production of impurity-free V3.5+ electrolyte by utilizing formic acid as a reducing agent and Pt/C as a catalyst. With the catalytic reduction of V4+ electrolyte, a high quality V3.5+ electrolyte was successfully produced and excellent cell performance was achieved. Based on the result, a prototype catalytic reactor employing Pt/C-decorated carbon felt was designed, and high-speed, continuous production of V3.5+ electrolyte in this manner was demonstrated with the reactor. This invention offers a simple but practical strategy to reduce the production cost of V3.5+ electrolyte while retaining quality that is adequate for high-performance operations.


2011 ◽  
Vol 105-107 ◽  
pp. 2217-2220
Author(s):  
Mu Lan Wang ◽  
Jian Min Zuo ◽  
Kun Liu ◽  
Xing Hua Zhu

In order to meet the demands for high speed and high precision in Computer Numerical Control (CNC) machine tools, the equipped CNC systems have begun to employ the technical route of software hardening. Making full use of the advanced performance of Large Scale Integrated Circuits (LSIC), this paper proposes implementing the functional modules of a CNC system on a Field Programmable Gate Array (FPGA), which is called an Intelligent Software Hardening Chip (ISHC). A high-performance CNC system architecture is constructed based on open-system principles and ISHCs. The corresponding programs can be designed in the Very High Speed Integrated Circuit Hardware Description Language (VHDL) and downloaded into the FPGA. These hardened modules, including the arithmetic module, contour interpolation module, position control module and so on, demonstrate that the proposed schemes are reasonable and feasible.
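A contour interpolation module generates the per-axis step sequence between programmed points. As a behavioural illustration of what such a module computes (a Python model, not the paper's VHDL), a Bresenham-style integer line interpolator looks like this:

```python
def bresenham(x0, y0, x1, y1):
    """Integer line interpolation: emit the (x, y) step sequence from
    (x0, y0) to (x1, y1), the kind of step generation a hardware
    contour interpolation module performs each clock cycle."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy                      # accumulated axis error
    x, y = x0, y0
    while True:
        points.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:                   # step along x
            err -= dy
            x += sx
        if e2 < dx:                    # step along y
            err += dx
            y += sy
    return points

print(bresenham(0, 0, 5, 3))
# → [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3)]
```

The appeal of hardening such a module is that the loop body is a few adds and compares, which maps naturally onto FPGA logic and runs one step per clock with no CPU involvement.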


Author(s):  
Vinay Sriram ◽  
David Kearney

High speed infrared (IR) scene simulation is used extensively in defense and homeland security to test the sensitivity of IR cameras and the accuracy of IR threat detection and tracking algorithms used commonly in IR missile approach warning systems (MAWS). A typical MAWS requires an input scene rate of over 100 scenes/second. Simulating a single IR scene that accounts for the effects of atmospheric turbulence, refraction, optical blurring and charge-coupled device (CCD) camera electronic noise typically takes 32 minutes on a Pentium 4 (2.8GHz) dual core processor [7]. Thus, in IR scene simulation, the processing power of modern computers is a limiting factor. In this paper we report our research to accelerate IR scene simulation using high performance reconfigurable computing. We constructed a multi Field Programmable Gate Array (FPGA) hardware acceleration platform and used it to accelerate a key computationally intensive IR algorithm. We were successful in reducing the computation time of IR scene simulation by over 36%. This research acts as a unique case study for accelerating large scale defense simulations using a high performance multi-FPGA reconfigurable computer.


2021 ◽  
Author(s):  
Lin Huang ◽  
Kun Qian

Abstract Early cancer detection greatly increases the chances for successful treatment, but available diagnostics for some tumours, including lung adenocarcinoma (LA), are limited. An ideal early-stage diagnosis of LA for large-scale clinical use must offer quick detection, low invasiveness, and high performance. Here, we conduct machine learning of serum metabolic patterns to detect early-stage LA. We extract direct metabolic patterns by optimized ferric particle-assisted laser desorption/ionization mass spectrometry within 1 second using only 50 nL of serum. We define a metabolic range of 100-400 Da with 143 m/z features. We diagnose early-stage LA with sensitivity of ~70-90% and specificity of ~90-93% through sparse regression machine learning of the patterns. We identify a biomarker panel of seven metabolites and relevant pathways to distinguish early-stage LA from controls (p < 0.05). Our approach advances the design of metabolic analysis for early cancer detection and holds promise as an efficient test for low-cost rollout to clinics.
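Sparse regression is what turns 143 m/z features into a small biomarker panel: an L1 penalty drives most feature weights to exactly zero, and the survivors form the panel. A minimal NumPy sketch on synthetic data (the feature counts, penalty, and data here are illustrative assumptions, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic "metabolic fingerprints": 60 samples x 20 m/z features,
# where only the first 3 features carry the case/control signal
X = rng.normal(size=(60, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.normal(size=60)

def lasso(X, y, lam=0.1, lr=0.01, steps=2000):
    """L1-penalised least squares via proximal gradient descent (ISTA).
    The soft-threshold step zeroes out uninformative feature weights,
    which is what performs the biomarker selection."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w = lasso(X, y)
selected = np.nonzero(np.abs(w) > 0.1)[0]
print(selected)  # only the informative features survive the L1 penalty
```

On real spectra the same mechanism would reduce the 143 features to the small panel of discriminative metabolites reported in the abstract.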

