High-performance visualisation of UAV sensor and image data with raster maps and topography in 3D

2014 ◽  
Vol 5 (3) ◽  
pp. 244-262 ◽  
Author(s):  
Daniel Stødle ◽  
Njål T. Borch ◽  
Stian A. Solbø ◽  
Rune Storvold


Author(s):  
Obed Appiah ◽  
James Benjamin Hayfron-Acquah ◽  
Michael Asante

For computer vision systems to effectively perform diagnosis, identification, tracking, monitoring and surveillance, image data must be free of noise. Various types of noise, such as salt-and-pepper (impulse), Gaussian, shot, quantization, anisotropic, and periodic noise, corrupt images, making it difficult to extract relevant information from them. This has motivated many proposed algorithms to address the problem. Among them, the median filter has been successful at removing salt-and-pepper noise while preserving edges. However, its moderate-to-high running time and its poor performance at high noise densities have led to numerous proposed modifications. The challenge with all of these modifications is the trade-off between running time and the quality of the denoised image. This paper proposes an algorithm that delivers high-quality denoised images with low running time. Two state-of-the-art algorithms are combined into one, and a technique called Mid-Value-Decision-Median is introduced to deliver high-quality denoised images in real time. The proposed algorithm, the High-Performance Modified Decision Based Median Filter (HPMDBMF), runs about 200 times faster than the state-of-the-art Modified Decision Based Median Filter (MDBMF) while generating equivalent output.
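The abstract does not spell out the HPMDBMF procedure itself, but the decision-based idea it builds on is simple: only pixels flagged as impulse noise are replaced by a neighbourhood median, while clean pixels pass through untouched. A minimal Python sketch of that general scheme (not the authors' algorithm, and assuming 8-bit images where impulses appear as 0 or 255):

```python
import numpy as np

def decision_based_median(img, k=3):
    """Minimal decision-based median filter sketch.

    Only pixels at the intensity extremes (0 or 255), the usual
    signature of salt-and-pepper noise, are replaced by the median
    of their k x k neighbourhood; clean pixels pass through untouched.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    noisy = (img == 0) | (img == 255)      # the "decision" step
    for r, c in zip(*np.nonzero(noisy)):
        window = padded[r:r + k, c:c + k]
        out[r, c] = np.median(window)
    return out
```

Refinements such as the paper's Mid-Value-Decision-Median target how the replacement value is chosen, especially at high noise densities; the sketch only shows the baseline decision logic.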


10.14311/981 ◽  
2008 ◽  
Vol 48 (3) ◽  
Author(s):  
S. Gross ◽  
T. Stehle

Imaging technology is highly important in today's medical environments. It provides information upon which the accuracy of the diagnosis, and consequently the wellbeing of the patient, relies. Increasing the quality and significance of medical image data is therefore one of the aims of scientific research and development. We introduce an integrated hardware and software framework for real-time image processing in medical environments, which we call RealTimeFrame. Our project is designed to offer flexibility, easy expandability and high performance. We use standard personal computer hardware to run our multithreaded software. A frame grabber card is used to capture video signals from medical imaging systems. A modular, user-defined process chain performs arbitrary manipulations on the image data. The graphical user interface offers configuration options and displays the processed image in either window or full-screen mode. Image sources and processing routines are encapsulated in dynamic library modules for easy functionality extension without recompilation of the entire software framework. Documented template modules for sources and processing steps are part of the software's source code.
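RealTimeFrame's actual module interface is not given in the abstract, so the following is only a hedged Python sketch of the process-chain concept it describes: interchangeable processing modules applied in sequence to each captured frame. All class and function names here are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class ProcessingModule:
    """One stage of a user-defined process chain (hypothetical interface)."""
    def process(self, frame: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class Invert(ProcessingModule):
    def process(self, frame):
        return 255 - frame

class Blur(ProcessingModule):
    def __init__(self, sigma=1.0):
        self.sigma = sigma
    def process(self, frame):
        return gaussian_filter(frame, self.sigma)

def run_chain(frame, chain):
    # Apply each module in order, mirroring a configurable pipeline
    # sitting between the frame grabber and the display.
    for module in chain:
        frame = module.process(frame)
    return frame
```

Encapsulating each stage behind one interface is what allows new modules to be loaded as dynamic libraries without recompiling the framework.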


2021 ◽  
Author(s):  
Matthias Arzt ◽  
Joran Deschamps ◽  
Christopher Schmied ◽  
Tobias Pietzsch ◽  
Deborah Schmidt ◽  
...  

We present Labkit, a user-friendly Fiji plugin for the segmentation of microscopy image data. It offers easy-to-use manual and automated image segmentation routines that can be rapidly applied to single- and multi-channel images as well as to timelapse movies in 2D or 3D. Labkit is specifically designed to work efficiently on big image data and enables users of consumer laptops to conveniently work with multi-terabyte images. This efficiency is achieved by using ImgLib2 and BigDataViewer as the foundation of our software. Furthermore, memory-efficient and fast random-forest-based pixel classification inspired by the Waikato Environment for Knowledge Analysis (Weka) is implemented. Optionally, we harness the power of graphics processing units (GPUs) to gain additional runtime performance. Labkit is easy to install on virtually all laptops and workstations. Additionally, Labkit is compatible with high-performance computing (HPC) clusters for distributed processing of big image data. The ability to use pixel classifiers trained in Labkit via the ImageJ macro language enables our users to integrate this functionality as a processing step in automated image processing workflows. Last but not least, Labkit comes with rich online resources such as tutorials and examples that will help users to familiarize themselves with available features and how to best use Labkit in a number of practical real-world use cases.
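Labkit's own API is not shown in the abstract; as an illustration of the Weka-inspired approach it mentions, here is a hedged Python sketch of random-forest pixel classification using scikit-learn and SciPy as stand-ins, with a per-pixel feature stack and sparse user scribbles as training labels:

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    # Simple per-pixel feature stack: raw intensity plus two Gaussian
    # scales, in the spirit of Weka-style trainable segmentation.
    feats = [img,
             ndimage.gaussian_filter(img, 1.0),
             ndimage.gaussian_filter(img, 4.0)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_and_segment(img, scribbles):
    # 'scribbles' holds sparse user labels: 0 = unlabeled, 1..n = classes.
    X = pixel_features(img)
    labeled = scribbles.ravel() > 0
    clf = RandomForestClassifier(n_estimators=50).fit(
        X[labeled], scribbles.ravel()[labeled])
    return clf.predict(X).reshape(img.shape)
```

Because prediction is independent per pixel, the same model can be applied block-wise to arbitrarily large volumes, which is how lazy big-image frameworks keep memory bounded.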


Author(s):  
W. W. Song ◽  
B. X. Jin ◽  
S. H. Li ◽  
X. Y. Wei ◽  
D. Li ◽  
...  

Traditional geospatial information platforms are built, managed and maintained by geoinformation agencies. They integrate various geospatial data (such as DLG, DOM, DEM, gazetteers, and thematic data) to provide data analysis services supporting government decision making. In the era of big data, it is challenging for traditional platforms to address data- and computing-intensive workloads. In this research, we propose to build a spatiotemporal cloud platform that uses HDFS to manage image data, MapReduce-based computing services and workflows for high-performance geospatial analysis, and optimized auto-scaling algorithms for quick access and visualization by web clients. Finally, we demonstrate the platform's feasibility with several GIS application cases.
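The platform's concrete MapReduce jobs are not specified, but the pattern itself can be sketched in a few lines of plain Python: map steps emit partial per-zone statistics from independent raster tiles, and a reduce step merges them, which is what lets such a computation scale across HDFS-resident image data. All names below are illustrative:

```python
from collections import defaultdict
from functools import reduce

def map_tile(tile):
    # tile: list of (zone_id, pixel_value) pairs from one raster tile.
    # Emits partial per-zone sums and counts; tiles are independent,
    # so map steps can run in parallel across the cluster.
    partial = defaultdict(lambda: [0.0, 0])
    for zone, value in tile:
        partial[zone][0] += value
        partial[zone][1] += 1
    return partial

def reduce_partials(a, b):
    # Merge two partial results into one.
    for zone, (s, n) in b.items():
        a[zone][0] += s
        a[zone][1] += n
    return a

tiles = [[(1, 10.0), (2, 3.0)], [(1, 14.0), (2, 5.0)]]
merged = reduce(reduce_partials, map(map_tile, tiles))
means = {zone: s / n for zone, (s, n) in merged.items()}
print(means)  # {1: 12.0, 2: 4.0}
```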


Author(s):  
Tianpeng Fan ◽  
Zhe Sun ◽  
Xiaoshen Zhang ◽  
Xunshi Yan ◽  
Jingjing Zhao ◽  
...  

Active magnetic bearing technology is increasingly used for its high performance, such as high-speed and frictionless operation. However, the rotor sometimes vibrates during operation due to residual unbalanced mass, which may affect the safety of the whole system. To determine the distribution of the residual unbalanced mass, this paper proposes a method based on frequency response, control-current analysis, and image data processing. Theoretical and calculated results show the validity of the method.
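The abstract does not give the method's equations, but one standard ingredient of control-current analysis can be sketched: extracting the amplitude and phase of the synchronous (once-per-revolution) component of the control current, which is the usual signature of residual unbalance. A hedged NumPy sketch with synthetic data:

```python
import numpy as np

def synchronous_component(signal, fs, rotor_freq):
    """Amplitude and phase of the once-per-revolution component of a
    control-current trace, sampled at fs Hz."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - rotor_freq))  # bin nearest rotor speed
    amplitude = 2.0 * np.abs(spectrum[k])      # one-sided amplitude
    phase = np.angle(spectrum[k])
    return amplitude, phase

# Synthetic check: 50 Hz rotor, unbalance current amplitude 0.8 A
fs, f0 = 5000.0, 50.0
t = np.arange(0, 1.0, 1.0 / fs)
current = 0.8 * np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.randn(t.size)
print(synchronous_component(current, fs, f0))  # approx (0.8, -pi/2)
```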


Author(s):  
Jin Hua Zhong ◽  
Wan Fang

To optimize the workflow of iterative 3D reconstruction and to support massive image data processing with high performance and high scalability, this article proposes FIODH, a distributed computing framework for images. It is based on a distributed hash algorithm and stores, processes, and merges image data across multiple nodes. The SIFT algorithm is used to extract feature points from the original images distributed over the hash nodes. During image clustering, agent nodes are responsible for task management and intermediate-result calculation. The clustering produces hierarchical trees that can be converted into computational tasks and assigned to the appropriate nodes. Experimental analysis shows that the algorithm achieves satisfactory results in efficiency and error adjustment, and its advantage becomes more pronounced as the amount of experimental data grows.
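FIODH's code is not shown, so the sketch below only illustrates the two ingredients the abstract names: a stable hash that assigns each image to a node without a central index, and SIFT feature extraction (here via OpenCV as a stand-in). The node list and identifiers are hypothetical:

```python
import hashlib
import cv2  # OpenCV; SIFT ships with opencv-contrib-python / OpenCV >= 4.4

NODES = ["node-0", "node-1", "node-2", "node-3"]  # hypothetical cluster

def assign_node(image_id: str) -> str:
    # A stable hash of the image identifier picks the storage/compute
    # node, so any node can locate an image without a central index.
    digest = hashlib.sha1(image_id.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def extract_sift(path: str):
    # Feature points later fed to the clustering step.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors

print(assign_node("IMG_0421.jpg"))  # e.g. "node-2"
```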


2013 ◽  
Vol 380-384 ◽  
pp. 4002-4006 ◽  
Author(s):  
Tu You Peng ◽  
Yang Zhao

Edges are among the most important features of an image, and edge detection is an important means of extracting image features. As the most popular high-performance processing technology, GPU parallel computing is one of the best choices for a parallel implementation of the Prewitt algorithm. Because the conventional CPU-based Prewitt algorithm is computationally intensive and time-consuming, its application is quite restricted. To improve its efficiency, a CUDA-based parallel Prewitt algorithm and a fast imaging algorithm are applied to obtain higher speedup. Finally, an effective method is proposed: GPR field data are converted into gray-scale image data and then processed with the CUDA-based Prewitt algorithm. Numerical results on GPR field data show that the algorithm is not only highly efficient but also improves target identification capability for GPR.
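As a reference for what the CUDA kernel parallelizes, here is a CPU-side Python sketch of the Prewitt operator: each output pixel depends only on a 3x3 neighbourhood, so a GPU version can assign one thread per pixel. This is a generic Prewitt implementation, not the authors' CUDA code:

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt kernels: horizontal and vertical gradient estimates.
KX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def prewitt_magnitude(img):
    # Each output pixel is a function of its 3x3 neighbourhood only,
    # which is exactly the data parallelism a CUDA kernel exploits.
    gx = convolve(img.astype(float), KX)
    gy = convolve(img.astype(float), KY)
    return np.hypot(gx, gy)
```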


2020 ◽  
Author(s):  
Victor Anton ◽  
Jannes Germishuys ◽  
Matthias Obst

This paper describes a data system for analysing large amounts of subsea movie data for marine ecological research. The system consists of three distinct modules for data management and archiving, citizen science, and machine learning in a high-performance computation environment. It allows scientists to upload underwater footage to a customised citizen-science website hosted by Zooniverse, where volunteers from the public classify the footage. Classifications with high agreement among citizen scientists are then used to train machine learning algorithms. An application programming interface allows researchers to test the algorithms and track biological objects in new footage. We tested our system using recordings from remotely operated vehicles (ROVs) in a Marine Protected Area, the Kosterhavet National Park in Sweden. Results indicate a strong decline of cold-water corals in the park over a period of 15 years, showing that our system can effectively extract valuable occurrence and abundance data for key ecological species from underwater footage. We argue that the combination of citizen-science tools, machine learning, and high-performance computers is key to successfully analysing large amounts of image data in the future, and suggest that these services should be consolidated and interlinked by national and international research infrastructures.
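The exact consensus rule is not stated in the abstract; a minimal sketch of the kind of agreement filter described, which keeps a volunteer classification for training only when enough votes concur, might look like this (both thresholds are assumptions):

```python
from collections import Counter

def consensus_labels(classifications, min_votes=5, min_agreement=0.8):
    """classifications: {clip_id: [label, label, ...]} from volunteers.
    Returns clip_id -> label for clips with enough votes and enough
    agreement, suitable as training data for the machine learning step."""
    consensus = {}
    for clip, votes in classifications.items():
        if len(votes) < min_votes:
            continue
        label, count = Counter(votes).most_common(1)[0]
        if count / len(votes) >= min_agreement:
            consensus[clip] = label
    return consensus
```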


Author(s):  
Moritz Waldmann ◽  
Alice Grosch ◽  
Christian Witzler ◽  
Matthias Lehner ◽  
Odo Benda ◽  
...  

Physics-based analyses have the potential to consolidate and substantiate medical diagnoses in rhinology. Such methods are the subject of intense investigation in research, but they are not yet used in clinical applications. One issue preventing their direct integration is that these methods are commonly developed as isolated solutions that do not consider the whole chain of data processing from initial medical data to higher-valued data. This manuscript presents a workflow that incorporates the whole data-processing pipeline in a single environment. Medical image data are fully automatically pre-processed by machine learning algorithms. The resulting geometries employed for the simulations on high-performance computing systems reach an accuracy of up to 99.5% compared to manually segmented geometries. Additionally, the user can upload and visualize 4-phase rhinomanometry data. Subsequent analysis and visualization of the simulation outcome extend the results of standardized diagnostic methods with a physically sound interpretation. Along with a detailed presentation of the methodologies, the capabilities of the workflow are demonstrated by evaluating an exemplary medical case. The pipeline output is compared to 4-phase rhinomanometry data; the comparison underlines the functionality of the pipeline, but it also illustrates the influence of mucosa swelling on the simulation.
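The abstract reports accuracy against manually segmented geometries without naming the metric, so the following is only a plausible sketch of how such a comparison is commonly scored, using voxel-wise accuracy alongside the Dice overlap coefficient:

```python
import numpy as np

def segmentation_scores(auto_mask, manual_mask):
    """Voxel-wise accuracy and Dice overlap between an automatically
    segmented geometry and a manually segmented reference (both boolean
    volumes of identical shape)."""
    auto = np.asarray(auto_mask, bool)
    manual = np.asarray(manual_mask, bool)
    accuracy = np.mean(auto == manual)
    dice = 2.0 * np.logical_and(auto, manual).sum() / (
        auto.sum() + manual.sum())
    return accuracy, dice
```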

