Raw Data Processing for Practical Time-of-Flight Super-Resolution

Author(s): Miguel Heredia Conde


1969 · Vol 6 (01) · pp. 48-57
Author(s): Edward S. Karlson, John J. Davis

An operational system for providing processed maintenance and repair information for vessels is described. The content includes a description of a detailed coding system for reducing raw data to composite code numbers suitable for automatic data processing. Objectives of the system and constraints thereon are discussed. The Marad data system has been operational for four years. The scope of the data processed and its utilization are presented. Seven current studies concerning vessels as a whole and specific shipboard equipment are included.
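The abstract does not spell out the actual code layout. Purely as an illustration of how raw maintenance records might be reduced to composite code numbers for automatic processing, the sketch below packs hypothetical coded fields positionally; the field names, widths, and functions are assumptions made for this sketch, not the Marad scheme itself.

```python
# Illustrative sketch only: packs hypothetical maintenance-record fields
# (vessel class, equipment group, failure mode, repair action) into a single
# composite code number suitable for mechanical sorting and tabulation.
# The field layout is an assumption, not the actual Marad coding system.
FIELDS = [("vessel_class", 100), ("equipment_group", 1000),
          ("failure_mode", 100), ("repair_action", 10)]

def encode(record: dict) -> int:
    """Concatenate the numeric fields into one composite code number."""
    code = 0
    for name, radix in FIELDS:
        value = record[name]
        if not 0 <= value < radix:
            raise ValueError(f"{name}={value} out of range")
        code = code * radix + value
    return code

def decode(code: int) -> dict:
    """Recover the individual fields from a composite code number."""
    record = {}
    for name, radix in reversed(FIELDS):
        record[name] = code % radix
        code //= radix
    return record

# Toy record: fields are arbitrary example values.
print(encode({"vessel_class": 12, "equipment_group": 345,
              "failure_mode": 7, "repair_action": 2}))  # 12345072
```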


2017 · Vol 13 (S336) · pp. 443-444
Author(s): I. D. Litovchenko, S. F. Likhachev, V. I. Kostenko, I. A. Girin, V. A. Ladygin, ...

Abstract. We discuss specific aspects of space-ground VLBI (SVLBI) data processing for spectral line experiments (H2O and OH masers) in the Radioastron project. To meet the technical requirements of the Radioastron mission, a new software FX correlator (ASCFX) and a unique data archive, which stores raw data from all VLBI stations for every experiment of the project, were developed at the Astro Space Center. All maser observations conducted within the Radioastron project have been correlated with the ASCFX correlator. Positive detections on space-ground baselines were found in 38 sessions out of 144 (a detection rate of about 27%). Finally, we present upper limits on the angular size of the most compact spots observed in two galactic H2O masers, W3OH(H2O) and OH043.8-0.1.
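The abstract does not detail the ASCFX internals. As a generic illustration of the FX correlation principle it relies on (Fourier-transform each station's raw voltage stream, then cross-multiply per baseline), here is a minimal sketch; the two-station setup, segment length, and all names are assumptions for illustration, not the ASCFX code.

```python
# Minimal FX-correlation sketch (not the ASCFX implementation): the "F" step
# transforms each station's raw voltage samples into spectra, the "X" step
# cross-multiplies the spectra per baseline and accumulates.
import numpy as np

def fx_correlate(voltages_a, voltages_b, n_channels=1024):
    """Return the time-averaged cross-spectrum of two raw voltage streams."""
    n_segments = min(len(voltages_a), len(voltages_b)) // n_channels
    acc = np.zeros(n_channels, dtype=complex)
    for k in range(n_segments):
        seg_a = voltages_a[k * n_channels:(k + 1) * n_channels]
        seg_b = voltages_b[k * n_channels:(k + 1) * n_channels]
        spec_a = np.fft.fft(seg_a)          # F: per-station spectra
        spec_b = np.fft.fft(seg_b)
        acc += spec_a * np.conj(spec_b)     # X: cross-multiply per baseline
    return acc / max(n_segments, 1)

# Toy usage: white-noise "voltages" at two stations sharing a narrow spectral
# line, loosely mimicking a maser feature common to both.
rng = np.random.default_rng(0)
common = np.cos(2 * np.pi * 0.1 * np.arange(1 << 16))
a = common + rng.normal(size=1 << 16)
b = common + rng.normal(size=1 << 16)
cross = fx_correlate(a, b)
print(int(np.argmax(np.abs(cross[:512]))))  # peak channel near the line frequency
```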


2015 · Vol 31 (2) · pp. 231-247
Author(s): Matthias Schnetzer, Franz Astleithner, Predrag Cetkovic, Stefan Humer, Manuela Lenk, ...

Abstract. This article contributes a framework for the quality assessment of imputations within a broader structure for evaluating the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of the accuracy of imputation and derive several computational approaches.
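As a plain illustration of the classification-rate idea (the function, names, and toy data below are assumptions made for this sketch, not the authors' implementation), the rate can be read as the share of imputed categorical values that match the known true values.

```python
# Illustrative sketch: classification rate as the share of imputed categorical
# values that agree with the true (known) values. Names and data are hypothetical.
def classification_rate(true_values, imputed_values):
    if len(true_values) != len(imputed_values):
        raise ValueError("sequences must have equal length")
    matches = sum(t == i for t, i in zip(true_values, imputed_values))
    return matches / len(true_values)

# Toy example: one categorical attribute imputed for five units.
true_vals    = ["A", "B", "B", "C", "A"]
imputed_vals = ["A", "B", "C", "C", "A"]
print(classification_rate(true_vals, imputed_vals))  # 0.8
```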


Author(s): David Gelernter

We’ve installed the foundation piles and are ready to start building Mirror Worlds. In this chapter we discuss (so to speak) the basement, in the next chapter we get to the attic, and the chapter after that fills in the middle region and glues the whole thing together. The basement we are about to describe is filled with lots of a certain kind of ensemble program. This kind of program, called a Trellis, makes the connection between external data and internal mirror-reality. The Trellis is, accordingly, a key player in the Mirror World cast. It’s also a good example of ensemble programming in general, and, I’ll argue, a highly significant gadget in itself.

The hulking problem with which the Trellis does battle on the Mirror World’s behalf is a problem that the real world, too, will be confronting directly and in person very soon. Floods of data are pounding down all around us in torrents. How will we cope? What will we do with all this stuff? When the encroaching electronification of the world pushes the downpour rate higher by a thousand or a million times or more, what will we do then?

Concretely: I’m talking about realtime data processing. The subject in this chapter is fresh data straight from the sensor. We’d like to analyze this fresh data in “realtime”—to achieve some understanding of data values as they emerge. Raw data pours into a Mirror World and gets refined by a data distillery in the basement. The processed, refined, one-hundred-percent pure stuff gets stored upstairs in the attic, where it ferments slowly into history. (In the next chapter we move upstairs.)

Trellis programs are the topic here: how they are put together, how they work. But there’s an initial question that’s too important to ignore. We need to take a brief trip outside into the deluge, to establish what this stuff is and where it’s coming from. Data-gathering instruments are generally electronic. They are sensors in the field, dedicated to the non-stop, automatic gathering of measurements; or they are full-blown infomachines, waiting for people to sit down, log on and enter data by hand.
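Gelernter keeps the description informal at this point. Purely as an illustrative sketch of a Trellis-like node that takes in raw sensor readings, refines them in realtime, and passes the refined values upward into history, one might imagine something like the following; the class, the running-mean refinement, and all names are inventions for this sketch, not Gelernter's actual Trellis design.

```python
# Purely illustrative sketch of a "Trellis-like" node: raw sensor readings come
# in at the bottom, a refined value is produced in realtime, and the refined
# values accumulate as history. The running-mean refinement and all names are
# assumptions made for this sketch, not the design described in the book.
from collections import deque

class TrellisNode:
    def __init__(self, window=10):
        self.raw = deque(maxlen=window)   # most recent raw readings
        self.history = []                 # refined values, slowly becoming history

    def ingest(self, reading: float) -> float:
        """Accept one fresh raw reading and emit the current refined value."""
        self.raw.append(reading)
        refined = sum(self.raw) / len(self.raw)  # simple refinement: running mean
        self.history.append(refined)
        return refined

# Toy usage with a small, noisy sensor stream.
node = TrellisNode(window=3)
for sample in [20.1, 20.4, 35.0, 20.2]:
    print(node.ingest(sample))
```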


2019 · Vol 12 (3) · pp. 1871-1888
Author(s): Felix Kelberlau, Jakob Mann

Abstract. Turbulent velocity spectra derived from velocity–azimuth display (VAD) scanning wind lidars deviate from spectra derived from one-point measurements due to averaging effects and cross-contamination among the velocity components. This work presents two novel methods for minimizing these effects through advanced raw data processing. The squeezing method is based on the assumption of frozen turbulence and introduces a time delay into the raw data processing in order to reduce cross-contamination. The two-beam method uses only certain laser beams in the reconstruction of wind vector components to overcome averaging along the measurement circle. Models are developed for conventional VAD scanning and for both new data processing methods to predict the spectra and identify systematic differences between the methods. Numerical modeling and comparison with measurement data were both used to assess the performance of the methods. We found that the squeezing method reduces cross-contamination by eliminating the resonance effect caused by the longitudinal separation of measurement points and also considerably reduces the averaging along the measurement circle. The two-beam method eliminates this averaging effect completely. The combined use of the squeezing and two-beam methods substantially improves the ability of VAD scanning wind lidars to measure in-wind (u) and vertical (w) fluctuations.
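The paper derives the methods formally. As a rough numerical illustration of the two-beam idea (the beam geometry, cone half-angle, and names below are assumptions made for this sketch, not the authors' implementation), radial velocities from two opposing beams on the scanning cone can be combined so that their difference isolates the horizontal component along that azimuth and their sum isolates the vertical component.

```python
# Rough numerical illustration (not the authors' code) of two-beam wind vector
# reconstruction from a conically scanning lidar: two opposing beams on the
# scanning circle give radial velocities whose difference isolates the
# horizontal component along that azimuth and whose sum isolates w.
# Angle convention and variable names are assumptions made for this sketch.
import numpy as np

phi = np.deg2rad(28.0)                    # assumed cone half-angle (from vertical)
u_true, v_true, w_true = 8.0, 0.0, 0.4    # synthetic wind vector, u along azimuth 0

def radial_velocity(theta, wind, phi=phi):
    """Line-of-sight velocity for a beam at azimuth theta on the cone."""
    u, v, w = wind
    return (u * np.sin(phi) * np.cos(theta)
            + v * np.sin(phi) * np.sin(theta)
            + w * np.cos(phi))

theta = 0.0                                # beam aligned with the mean wind
vr_fore = radial_velocity(theta, (u_true, v_true, w_true))
vr_aft = radial_velocity(theta + np.pi, (u_true, v_true, w_true))

u_est = (vr_fore - vr_aft) / (2.0 * np.sin(phi))   # horizontal component along theta
w_est = (vr_fore + vr_aft) / (2.0 * np.cos(phi))   # vertical component

print(u_est, w_est)   # recovers 8.0 and 0.4 in this noise-free example
```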

