Distributed Formulation of Artificial Reconstruction Technique with Reordering of Critical Data Sets

Author(s):
Boguslaw Butrylo,
Marek Tudruj,
Lukasz Masko


2017
Author(s):
Ye Yuan,
Ludwig Ries,
Hannes Petermeier,
Martin Steinbacher,
Angel J. Gómez-Peláez,
...

Abstract. Critical data selection is essential for determining representative baseline levels of atmospheric trace gas measurements, even at remote measuring sites. Different data selection techniques have been used around the world, which could potentially lead to bias when comparing data from different stations. This paper presents a novel statistical data selection method based on the CO2 diurnal pattern typically occurring at high-elevation mountain stations. Its capability and applicability were studied for atmospheric CO2 measurement records from 2010 to 2016 at six Global Atmosphere Watch (GAW) stations in Europe, namely Zugspitze-Schneefernerhaus (Germany), Sonnblick (Austria), Jungfraujoch (Switzerland), Izaña (Spain), Schauinsland (Germany) and Hohenpeissenberg (Germany). Three other frequently applied statistical data selection methods were implemented for comparison. Among all selection routines, the new method, named Adaptive Baseline Finder (ABF), resulted in lower selection percentages, with lower maxima during winter and higher minima during summer in the selected data. To investigate long-term trends and seasonality, the seasonal-trend decomposition technique STL was applied. Compared with the unselected data, the mean annual growth rates of all selected data sets were not significantly different, except at Schauinsland. However, clear differences were found in the annual amplitudes as well as in the seasonal time structure. Based on correlation analysis, the ABF selection showed a better representation of lower free-tropospheric conditions.
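
The trend and seasonality analysis described above can be sketched with the STL implementation in statsmodels. This is a minimal illustration, not the authors' code; the input file and column names are hypothetical placeholders.

```python
# Minimal sketch of STL-based trend/seasonality analysis, assuming a
# monthly-mean CO2 series; file and column names are hypothetical.
import pandas as pd
from statsmodels.tsa.seasonal import STL

co2 = pd.read_csv("station_co2_monthly.csv",
                  index_col="date", parse_dates=True)["co2_ppm"]

# STL decomposes the series into trend + seasonal + residual components.
result = STL(co2, period=12, robust=True).fit()

# Mean annual growth rate: average year-over-year change of the trend.
growth = result.trend.diff(12).mean()

# Annual amplitude: peak-to-trough span of the seasonal component per year.
amplitude = result.seasonal.groupby(result.seasonal.index.year).agg(
    lambda s: s.max() - s.min())

print(f"mean annual growth: {growth:.2f} ppm/yr")
print(amplitude)
```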


2019
Vol 28 (4)
pp. 533-547
Author(s):
A.A. Haseena Thasneem,
M. Mohamed Sathik,
R. Mehaboobathunnisa

Abstract The three-dimensional (3D) reconstruction of medical images usually requires hundreds of two-dimensional (2D) scan images. Segmentation, an obligatory part of reconstruction, must be performed for all the slices, consuming enormous storage space and time. To reduce both, this paper proposes a three-stage procedure, namely slice selection, segmentation and interpolation. The methodology has the potential to reconstruct the human head in 3D from a minimal set of selected slices. The first stage, slice selection, is based on structural similarity measurement, discarding the most similar slices with no or minimal impact on details. The second stage, segmentation of the selected slices, is performed using our proposed phase-field segmentation method. Our segmentation results are validated by comparison with other deformable models, and the results show that the proposed method provides fast and accurate segmentation. The third stage, interpolation, is based on modified curvature registration-based interpolation and is applied to re-create the discarded slices. This method is compared to both standard linear interpolation and registration-based interpolation on 100 tomographic data sets. Results show that the modified curvature registration-based interpolation reconstructs missing slices with 96% accuracy and improves sensitivity (95.802%) to a level on par with specificity (95.901%).
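
The slice-selection stage lends itself to a short sketch: keep a slice only when it is sufficiently dissimilar from the last kept one. This is a minimal illustration under assumed names; the 0.98 threshold and the input file are my assumptions, not values from the paper.

```python
# Minimal sketch of SSIM-based slice selection over a scan stack of
# shape (n_slices, H, W); threshold and file name are assumptions.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def select_slices(volume, threshold=0.98):
    """Keep a slice only if it differs enough from the last kept slice."""
    kept = [0]  # always keep the first slice
    rng = volume.max() - volume.min()
    for i in range(1, volume.shape[0]):
        if ssim(volume[kept[-1]], volume[i], data_range=rng) < threshold:
            kept.append(i)  # dissimilar enough: keep it
    return kept  # discarded slices are re-created later by interpolation

volume = np.load("head_stack.npy")  # hypothetical file
print("selected", len(select_slices(volume)), "of", volume.shape[0], "slices")
```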


2013
Vol 2013
pp. 1-8
Author(s):
Saeed Seyyedi,
Kubra Cengiz,
Mustafa Kamasak,
Isa Yildirim

Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. More recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D DBT imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested in a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively using mean structural similarity (MSSIM) values.
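
To make the ART step concrete: one sweep projects the current image estimate onto the hyperplane of each ray equation in turn. The simulator itself is written in C++; the following is a minimal Python sketch under assumed names, not the simulator's code.

```python
# Minimal sketch of one ART (Kaczmarz) sweep: for each ray i, project the
# current estimate x onto the hyperplane a_i . x = b_i.
# A is a sparse (n_rays x n_voxels) system matrix; names are assumptions.
import numpy as np
import scipy.sparse as sp

def art_sweep(A, b, x, lam=0.5):
    A = sp.csr_matrix(A)
    for i in range(A.shape[0]):
        row = A.getrow(i)
        norm2 = row.multiply(row).sum()  # ||a_i||^2
        if norm2 == 0:
            continue  # ray misses the volume
        residual = b[i] - row.dot(x)[0]
        # Relaxed projection step; an ART+TV scheme would follow each
        # sweep with a total-variation denoising step on x.
        x = x + lam * (residual / norm2) * row.toarray().ravel()
    return x
```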


2009
Vol 9 (1)
pp. 70-81
Author(s):
Meredith Skeels,
Bongshin Lee,
Greg Smith,
George G. Robertson

Uncertainty in data occurs in domains ranging from natural science to medicine to computer science. By developing ways to include uncertainty in our information visualizations, we can provide more accurate depictions of critical data sets so that people can make more informed decisions. One hindrance to visualizing uncertainty is that we must first understand what uncertainty is and how it is expressed. We reviewed existing work from several domains on uncertainty and created a classification of uncertainty based on the literature. We empirically evaluated and improved upon our classification by conducting interviews with 18 people from several domains, who self-identified as working with uncertainty. Participants described what uncertainty looks like in their data and how they deal with it. We found commonalities in uncertainty across domains and believe our refined classification will help us in developing appropriate visualizations for each category of uncertainty.


2018
Vol 5 (1)
pp. 205395171875668
Author(s):
Kristin Veel

With slogans such as ‘Tell the stories hidden in your data’ (www.narrativescience.com) and ‘From data to clear, insightful content – Wordsmith automatically generates narratives on a massive scale that sound like a person crafted each one’ (www.automatedinsights.com), a series of companies currently market themselves on the ability to turn data into stories through Natural Language Generation (NLG) techniques. The data interpretation and knowledge production process is here automated, while at the same time hailing narrativity as a fundamental human ability of meaning-making. Reading both the marketing rhetoric and the functionality of the automated narrative services through narrative theory allows for a contextualization of the rhetoric flourishing in Big Data discourse. Building upon case material obtained from companies such as Arria NLG, Automated Insights, Narrativa, Narrative Science, and Yseop, this article argues that what might be seen as a ‘re-turn’ of narrative as a form of knowledge production that can make sense of large data sets inscribes itself in – but also rearticulates – an ongoing debate about what narrative entails. Methodological considerations are thus raised on the one hand about the insights to be gained for critical data studies by turning to literary theory, and on the other hand about how automated technologies may inform our understanding of narrative as a faculty of human meaning-making.


2019
Vol 8 (2)
pp. 3516-3519

Association rule mining remains a popular and successful technique for extracting critical information from massive data sets. It attempts to find plausible relationships between items in large transactional data sets. Frequent patterns must be generated to form these associations. The "R-Apriori" algorithm and its set of improved variants, among the earliest frequent pattern generation algorithms proposed, remain a popular choice because they are simple to implement and naturally parallel. Although there are several efficient single-machine implementations of Apriori, the amount of data now available far exceeds the capacity of one machine, so it is necessary to scale out over many machines to meet the ever-growing demands of this data. MapReduce is a popular fault-tolerant distributed framework. However, heavy disk I/O in each MapReduce operation hinders the efficient implementation of iterative MapReduce data processing algorithms such as Apriori. The recently proposed distributed data-flow platform Spark overcomes the MapReduce disk I/O bottleneck, and Spark therefore provides an ideal platform for distributed Apriori. Even so, the most computationally expensive task in an Apriori implementation is generating candidate sets from every possible pair of singleton frequent items and checking each pair against every transaction record. Here we propose a new approach that drastically reduces this complexity by eliminating the candidate generation step and avoiding costly comparisons. We carry out in-depth experiments to determine the effectiveness and scalability of our approach. Our experiments show that it typically outperforms Spark's classic Apriori implementation across various data sets.
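
The pair-counting idea can be sketched in PySpark: count singletons, broadcast the frequent ones, then count only pairs of frequent singletons, with no explicit candidate generation pass. This is an illustrative sketch, not the paper's implementation; the file name and minimum support value are assumptions.

```python
# Two-pass frequent-pair mining on Spark without explicit candidate
# generation; input file and min_support are illustrative assumptions.
from itertools import combinations
from pyspark import SparkContext

sc = SparkContext(appName="frequent-pairs")
min_support = 100

# One transaction per line, items whitespace-separated.
transactions = sc.textFile("transactions.txt") \
                 .map(lambda line: sorted(set(line.split())))

# Pass 1: singleton counts; keep only frequent items.
frequent = (transactions.flatMap(lambda t: [(item, 1) for item in t])
            .reduceByKey(lambda a, b: a + b)
            .filter(lambda kv: kv[1] >= min_support)
            .keys()
            .collect())
freq = sc.broadcast(set(frequent))

# Pass 2: count pairs restricted to frequent singletons only.
pairs = (transactions
         .map(lambda t: [i for i in t if i in freq.value])
         .flatMap(lambda t: [(p, 1) for p in combinations(t, 2)])
         .reduceByKey(lambda a, b: a + b)
         .filter(lambda kv: kv[1] >= min_support))

print(pairs.take(10))
```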


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
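
The per-pixel arithmetic described here is simple to state in NumPy terms: at 20 channels/eV, a 1 eV offset corresponds to 20 channels. The following is a minimal sketch under an assumed array layout and file name, not the software of [1].

```python
# Minimal sketch of per-pixel processing of the two offset EELS spectra;
# array layout and file name are assumptions (shape: 80 x 80 x 2 x 1024).
import numpy as np

OFFSET = 20  # 1 eV offset * 20 channels/eV

spectra = np.load("spectrum_image.npy")
s0, s1 = spectra[..., 0, :], spectra[..., 1, :]

# Difference spectrum: fixed-pattern detector artifacts cancel on subtraction.
difference = s0 - s1

# Normal spectrum: shift the offset spectrum back into register, then add.
normal = s0 + np.roll(s1, OFFSET, axis=-1)
```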


Author(s):
Mark Ellisman,
Maryann Martone,
Gabriel Soto,
Eleizer Masliah,
David Hessler,
...

Structurally oriented biologists examine cells, tissues, organelles and macromolecules in order to gain insight into cellular and molecular physiology by relating structure to function. The understanding of these structures can be greatly enhanced by techniques for the visualization and quantitative analysis of three-dimensional structure. Three projects from current research activities will be presented in order to illustrate both the present capabilities of computer-aided techniques as well as their limitations and future possibilities. The first project concerns the three-dimensional reconstruction of the neuritic plaques found in the brains of patients with Alzheimer's disease. We have developed a software package, “Synu”, for investigation of 3D data sets, which has been used in conjunction with laser confocal light microscopy to study the structure of the neuritic plaque. Tissue sections of autopsy samples from patients with Alzheimer's disease were double-labeled for tau, a cytoskeletal marker for abnormal neurites, and synaptophysin, a marker of presynaptic terminals.


Author(s):  
Douglas L. Dorset

The quantitative use of electron diffraction intensity data for the determination of crystal structures represents the pioneering achievement in the electron crystallography of organic molecules, an effort largely begun by B. K. Vainshtein and his co-workers. However, despite numerous representative structure analyses yielding results consistent with X-ray determinations, this entire effort was viewed with considerable mistrust by many crystallographers. This was no doubt due to the rather high crystallographic R-factors reported for some structures and, more importantly, the failure to convince many skeptics that the measured intensity data were adequate for ab initio structure determinations. We have recently demonstrated the utility of these data sets for structure analyses by direct phase determination based on the probabilistic estimate of three- and four-phase structure invariant sums. Examples include the structure of diketopiperazine using Vainshtein's 3D data, a similar 3D analysis of the room-temperature structure of thiourea, and a zonal determination of the urea structure, the latter also based on data collected by the Moscow group.
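
For orientation, the three-phase (triplet) structure invariant referred to here has a standard form. The following is the textbook Cochran relation for an equal-atom structure, stated as background rather than taken from this abstract, with E the normalized structure factors and N the number of atoms in the unit cell.

```latex
% Triplet structure invariant and the reliability parameter of its
% probabilistic estimate (Cochran); notation assumed, not from the text.
\Phi_3 = \varphi_{\mathbf{h}} + \varphi_{\mathbf{k}} + \varphi_{-\mathbf{h}-\mathbf{k}}
\approx 0, \qquad
P(\Phi_3) \propto \exp(\kappa \cos \Phi_3), \quad
\kappa = \frac{2}{\sqrt{N}}\,\bigl| E_{\mathbf{h}} E_{\mathbf{k}} E_{-\mathbf{h}-\mathbf{k}} \bigr|
```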


Author(s):
W. Shain,
H. Ancin,
H.C. Craighead,
M. Isaacson,
L. Kam,
...

Neural prostheses have the potential to restore nervous system functions lost through trauma or disease. Nanofabrication extends this approach to implants for stimulating and recording from single or small groups of neurons in the spinal cord and brain; however, tissue compatibility is a major limitation to their practical application. We are using a cell culture method for quantitatively measuring cell attachment to surfaces designed for nanofabricated neural prostheses. Silicon wafer test surfaces composed of 50-μm bars separated by aliphatic regions were fabricated using methods similar to a procedure described by Kleinfeld et al. Test surfaces contained either a single or double positive charge/residue. Cyanine dyes (diIC18(3)) stained the background and cell membranes (Fig 1); however, identification of individual cells at higher densities was difficult (Fig 2). Nuclear staining with acriflavine allowed discrimination of individual cells and permitted automated counting of nuclei using 3-D data sets from the confocal microscope (Fig 3). For cell attachment assays, LRM55 astroglial cells and astrocytes in primary cell culture were plated at increasing densities on test substrates, incubated for 24 hr, fixed, stained, mounted on coverslips, and imaged with a 10x objective.
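
Automated nucleus counting from a 3-D confocal stack can be sketched as threshold, clean up, then label connected components. This is a minimal illustration with assumed names; Otsu thresholding and the input file are my assumptions, not the authors' pipeline.

```python
# Minimal sketch of automated nucleus counting in a 3-D confocal stack
# (acriflavine channel); threshold choice and file name are assumptions.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

stack = np.load("acriflavine_stack.npy")  # shape (z, y, x)

mask = stack > threshold_otsu(stack)   # foreground = stained nuclei
mask = ndimage.binary_opening(mask)    # remove single-voxel noise

# Each connected 3-D component is counted as one nucleus.
labels, n_nuclei = ndimage.label(mask)
print("counted", n_nuclei, "nuclei")
```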

