An Affordable Image-Analysis Platform to Accelerate Stomatal Phenotyping During Microscopic Observation

2021, Vol 12
Author(s): Yosuke Toda, Toshiaki Tameshige, Masakazu Tomiyama, Toshinori Kinoshita, Kentaro K. Shimizu

Recent technical advances in the computer-vision domain have facilitated the development of various methods for achieving image-based quantification of stomata-related traits. However, the installation cost of such a system and the difficulties of operating it on-site have been hurdles for experimental biologists. Here, we present a platform that allows real-time stomata detection during microscopic observation. The proposed system consists of a deep neural network model-based stomata detector and an upright microscope connected to a USB camera and a graphics processing unit (GPU)-supported single-board computer. All the hardware components are commercially available at common electronic commerce stores at a reasonable price. Moreover, the machine-learning model can be prepared using freely available cloud services. This approach allows users to set up a phenotyping platform at low cost. As a proof of concept, we trained our model to detect dumbbell-shaped stomata from wheat leaf imprints. Using this platform, we collected a comprehensive range of stomatal phenotypes from wheat leaves. We confirmed notable differences in stomatal density (SD) between adaxial and abaxial surfaces and in stomatal size (SS) between wheat-related species of different ploidy. Such a platform is expected to accelerate research involving all aspects of stomatal phenotyping.
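As a rough illustration, and not the authors' implementation, the Python sketch below shows the kind of real-time loop such a platform could run: frames are grabbed from a USB camera with OpenCV, passed to a detector, and the resulting stomatal density is overlaid on the live image. The detect_stomata function and the FIELD_AREA_MM2 constant are hypothetical placeholders standing in for the trained deep-learning model and the calibrated field of view.

import cv2

FIELD_AREA_MM2 = 0.15  # assumed area of one microscope field of view (mm^2)

def detect_stomata(frame):
    # Placeholder for a trained deep-learning detector; expected to
    # return a list of (x, y, w, h) bounding boxes in pixel coordinates.
    return []

cap = cv2.VideoCapture(0)  # USB camera attached to the microscope
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = detect_stomata(frame)
    density = len(boxes) / FIELD_AREA_MM2  # stomatal density (stomata per mm^2)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, "SD = %.1f / mm^2" % density, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("stomata", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()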

2021
Author(s): Airidas Korolkovas, Alexander Katsevich, Michael Frenkel, William Thompson, Edward Morton

X-ray computed tomography (CT) can provide 3D images of density, and possibly the atomic number, for large objects like passenger luggage. This information, while generally very useful, is often insufficient to identify threats like explosives and narcotics, which can have an average composition similar to that of benign everyday materials such as plastics, glass, and light metals. A much more specific material signature can be measured with X-ray diffraction (XRD). Unfortunately, the XRD signal is very faint compared to the transmitted one and is also challenging to reconstruct for objects larger than a small laboratory sample. In this article, we analyze a novel low-cost scanner design that captures CT and XRD signals simultaneously and uses the least possible collimation to maximize the flux. To simulate a realistic instrument, we derive a formula for the resolution of any diffraction pathway, taking into account the polychromatic spectrum and the finite size of the source, detector, and each voxel. We then show how to reconstruct XRD patterns from a large phantom with multiple diffracting objects. Our approach includes a reasonable amount of photon-counting noise (Poisson statistics), as well as measurement bias, in particular incoherent Compton scattering. The resolution of our reconstruction is sufficient to provide significantly more information than standard CT, thus increasing the accuracy of threat detection. Our theoretical model is implemented in GPU (Graphics Processing Unit) accelerated software which can be used to assess and further optimize scanner designs for specific applications in security, healthcare, and manufacturing quality control.
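The measurement model described above, a faint coherent-diffraction signal sitting on an incoherent Compton background and degraded by photon-counting noise, can be illustrated with a short Python/NumPy simulation. The peak positions, widths, and count levels below are invented for illustration and are not the authors' phantom or reconstruction method.

import numpy as np

rng = np.random.default_rng(0)
q = np.linspace(0.5, 4.0, 500)            # momentum-transfer grid

def gaussian(q, q0, sigma):
    return np.exp(-0.5 * ((q - q0) / sigma) ** 2)

# Coherent diffraction peaks (faint) plus a slowly varying Compton background.
diffraction = 40.0 * gaussian(q, 1.6, 0.05) + 25.0 * gaussian(q, 2.3, 0.05)
compton_bg = 15.0 * np.ones_like(q)
expected_counts = diffraction + compton_bg

# Photon counting follows Poisson statistics.
measured = rng.poisson(expected_counts)

# Naive bias correction: subtract the modelled Compton contribution.
corrected = measured - compton_bg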


2011, Vol 21 (01), pp. 31-47
Author(s): Noel Lopes, Bernardete Ribeiro

The Graphics Processing Unit (GPU), originally designed for rendering graphics and difficult to program for other tasks, has since evolved into a device suitable for general-purpose computations. As a result, graphics hardware has become progressively more attractive, yielding unprecedented performance at relatively low cost. It is therefore an ideal candidate for accelerating a wide variety of data-parallel tasks in many fields, such as Machine Learning (ML). As problems become increasingly demanding, parallel implementations of learning algorithms are crucial for practical applications. In particular, implementing Neural Networks (NNs) on GPUs can significantly reduce the long training times of the learning process. In this paper we present a GPU parallel implementation of the Back-Propagation (BP) and Multiple Back-Propagation (MBP) algorithms and describe the GPU kernels needed for this task. The results obtained on well-known benchmarks show faster training times and improved performance compared to the implementation on traditional hardware, owing to maximized floating-point throughput and memory bandwidth. Moreover, a preliminary GPU-based Autonomous Training System (ATS) is developed, which aims at automatically finding high-quality NN-based solutions for a given problem.
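For context, the following minimal NumPy sketch shows one back-propagation step for a network with a single sigmoid hidden layer; in a GPU implementation like the one described above, each of these matrix operations would map onto a parallel kernel. The layer sizes, loss, and learning rate are arbitrary choices, not the authors' configuration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))         # batch of 64 input patterns
T = rng.standard_normal((64, 1))          # target outputs
W1 = 0.1 * rng.standard_normal((10, 16))  # input-to-hidden weights
W2 = 0.1 * rng.standard_normal((16, 1))   # hidden-to-output weights
lr = 0.01                                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass
H = sigmoid(X @ W1)                       # hidden activations
Y = H @ W2                                # linear output layer

# Backward pass for a mean-squared-error loss
dY = (Y - T) / len(X)
dW2 = H.T @ dY
dH = (dY @ W2.T) * H * (1.0 - H)          # sigmoid derivative
dW1 = X.T @ dH

# Gradient-descent update
W1 -= lr * dW1
W2 -= lr * dW2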


2009, Vol 17 (25), pp. 22320
Author(s): Claudio Vinegoni, Lyuba Fexon, Paolo Fumene Feruglio, Misha Pivovarov, Jose-Luiz Figueiredo, ...

2013, Vol 3 (4), pp. 81-91
Author(s): Sanjay P. Ahuja, Thomas F. Furman, Kerwin E. Roslie, Jared T. Wheeler

Amazon's Elastic Compute Cloud (EC2) service is one of the leading public cloud offerings and provides many different levels of service. This paper evaluates the memory, central processing unit (CPU), and input/output (I/O) performance of two different tiers of hardware offered through Amazon's EC2. Using three distinct system benchmarks, the performance of the micro spot instance and the M1 small instance is measured and compared. To examine the performance and scalability of the hardware, the virtual machines are set up in cluster formations ranging from two to eight nodes. The results show that the scalability of the cloud is achieved by increasing resources when applicable. This paper also looks at the economic model and other cloud services offered by Amazon's EC2, Microsoft's Azure, and Google's App Engine.
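As a rough idea of what such system benchmarks measure, the Python sketch below times a CPU-bound loop, a large in-memory array copy, and a file write. The workload sizes are arbitrary, and this is not one of the benchmark suites used in the study.

import os
import time
import numpy as np

def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

cpu_s = timed(lambda: sum(i * i for i in range(2_000_000)))  # CPU
mem_s = timed(lambda: np.ones(20_000_000).copy())            # memory bandwidth

def io_work():                                               # disk I/O
    with open("bench.tmp", "wb") as f:
        f.write(b"\0" * 50_000_000)

io_s = timed(io_work)
os.remove("bench.tmp")

print("CPU: %.2fs  memory: %.2fs  I/O: %.2fs" % (cpu_s, mem_s, io_s))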


2012, Vol 463-464, pp. 1073-1076
Author(s): Helmar Alvares, Eliana Prado Lopes Aude, Ernesto Prado Lopes

This work proposes a web-based laboratory where researchers share the facilities of a simulation environment for parallel algorithms that solve the scheduling problem known as the Job Shop Problem (JSP). The environment supports multi-language platforms and uses a low-cost, high-performance Graphics Processing Unit (GPU) connected to a Java application server to help design more efficient solutions for the JSP. Within a single web environment, one can analyze and compare different methods and meta-heuristics. Each newly developed method is stored in an environment library and made available to all other users of the environment. This collection of openly accessible solution methods should allow rapid convergence towards optimal solutions for the JSP. The algorithm uses the parallel architecture of the system to handle threads. Each thread represents a job operation, and the number of threads scales with the problem's size. The threads exchange information in order to find the best solution. This cooperation decreases response times by one or two orders of magnitude.
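To make the cooperative-search idea concrete, the Python sketch below runs several worker threads that sample operation sequences for a tiny, made-up 3-job x 3-machine instance and share the best makespan found through a lock-protected record. It is a toy CPU illustration of the cooperation scheme, not the environment's GPU implementation.

import random
import threading

# jobs[j] = list of (machine, duration) in technological order
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]

def makespan(sequence):
    # Decode an operation-based sequence (each job id repeated once per
    # operation) into a feasible schedule and return its makespan.
    next_op = [0] * len(jobs)
    job_ready = [0] * len(jobs)
    mach_ready = [0] * 3
    for j in sequence:
        m, d = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[m])
        job_ready[j] = mach_ready[m] = start + d
        next_op[j] += 1
    return max(job_ready)

best = {"value": float("inf")}
lock = threading.Lock()

def worker(trials):
    base = [j for j, ops in enumerate(jobs) for _ in ops]
    for _ in range(trials):
        seq = base[:]
        random.shuffle(seq)
        c = makespan(seq)
        with lock:                    # threads share the best solution found
            if c < best["value"]:
                best["value"] = c

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("best makespan found:", best["value"])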


2011, Vol 1 (32), pp. 9
Author(s): Robert Anthony Dalrymple, Alexis Herault, Giuseppe Bilotta, Rozita Jalali Farahani

This paper discusses the meshless numerical method Smoothed Particle Hydrodynamics (SPH) and its application to water waves and nearshore circulation. In particular, we focus on an implementation of the model on the graphics processing unit (GPU) of computers, which permits low-cost supercomputing capabilities for certain types of computational problems. The implementation here runs on Nvidia graphics cards, from off-the-shelf laptops to top-of-the-line Tesla cards for workstations with their current 480 massively parallel streaming processors. Here we apply the model to breaking waves and nearshore circulation, demonstrating that SPH can model changes in wave properties due to shoaling, refraction, diffraction, and wave-current interaction, as well as nonlinear phenomena such as harmonic generation and, using wave-period-averaged quantities, aspects of nearshore circulation such as wave set-up, longshore currents, rip currents, and nearshore circulation gyres.
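The core SPH operation, smoothing particle quantities with a kernel, can be sketched in a few lines of Python/NumPy; the Gaussian kernel, particle count, and smoothing length below are illustrative choices and not those of the GPU code discussed in the paper. On a GPU, the all-pairs (or neighbour-list) summation is the part distributed across the streaming processors.

import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(200, 2))   # particle positions (m)
mass = np.full(200, 1.0)                     # particle masses (kg)
h = 0.05                                     # smoothing length (m)

def gaussian_kernel(r, h):
    # Normalised 2D Gaussian smoothing kernel W(r, h).
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

# Pairwise distances between all particles.
r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

# SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h)
density = (mass[None, :] * gaussian_kernel(r, h)).sum(axis=1)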


2012, Vol 10 (H16), pp. 679-680
Author(s): Christopher J. Fluke

As we move ever closer to the Square Kilometre Array era, support for real-time, interactive visualisation and analysis of tera-scale (and beyond) data cubes will be crucial for on-going knowledge discovery. However, the data-on-the-desktop approach to analysis and visualisation that most astronomers are comfortable with will no longer be feasible: tera-scale data volumes exceed the memory and processing capabilities of standard desktop computing environments. Instead, there will be an increasing need for astronomers to utilise remote high performance computing (HPC) resources. In recent years, the graphics processing unit (GPU) has emerged as a credible, low cost option for HPC. A growing number of supercomputing centres are now investing heavily in GPU technologies to provide O(100) Teraflop/s processing. I describe how a GPU-powered computing cluster allows us to overcome the analysis and visualisation challenges of tera-scale data. With a GPU-based architecture, we have moved the bottleneck from processing-limited to bandwidth-limited, achieving exceptional real-time performance for common visualisation and data analysis tasks.


Author(s): Gudur Vamsi Krishna, K. F. Bharati

Cloud computing offers streamlined tools for improving business efficiency. Cloud providers typically offer two distinct forms of usage plan: reserved and on-demand. Reserved plans provide inexpensive long-term contracts, whereas on-demand plans are more expensive and suited to short rather than long periods. To satisfy customer demands at fair rates, cloud resources must be provisioned wisely. Many existing works rely mainly on low-cost reserved-resource strategies, which risk under-provisioning or over-provisioning, rather than on costly on-demand solutions. Because demand variability and unbalanced distribution of cloud resources can incur large availability costs, resource allocation has become an extremely challenging problem. This article proposes a hybrid approach to allocating cloud resources according to complex customer requests. The strategy is constructed as a two-step mechanism consisting of a reservation stage followed by a flexible on-demand stage. By formulating each step as an optimization problem, we minimize the total provisioning cost while preserving service quality. Because cloud requests are uncertain, we model client requirements as probability distributions and set up a stochastic optimization-based approach. Our technique is evaluated using various approaches, and the results demonstrate its effectiveness in assigning individual cloud resources.
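A minimal sketch of the two-stage idea, under the assumption that a reservation level is fixed first and on-demand capacity covers whatever sampled demand exceeds it, is given below in Python. The prices and the demand distribution are invented for illustration and are not the authors' model or data.

import numpy as np

rng = np.random.default_rng(0)
demand_samples = rng.normal(100, 25, size=10_000).clip(min=0)  # uncertain demand

RESERVED_PRICE = 1.0    # cost per reserved unit (paid whether used or not)
ON_DEMAND_PRICE = 3.0   # cost per on-demand unit (paid only when needed)

def expected_cost(reserved):
    # Second stage: buy on-demand capacity for the demand that exceeds
    # the reservation, then average over the sampled demand scenarios.
    overflow = np.maximum(demand_samples - reserved, 0.0)
    return RESERVED_PRICE * reserved + ON_DEMAND_PRICE * overflow.mean()

# First stage: pick the reservation level with the lowest expected cost.
candidates = np.arange(0, 201)
best_reserved = min(candidates, key=expected_cost)
print("reserve %d units, expected cost %.1f"
      % (best_reserved, expected_cost(best_reserved)))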

