Peculiarities of Maser Data Correlation / Postcorrelation in Radioastron Mission

2017 ◽  
Vol 13 (S336) ◽  
pp. 443-444
Author(s):  
I. D. Litovchenko ◽  
S. F. Likhachev ◽  
V. I. Kostenko ◽  
I. A. Girin ◽  
V. A. Ladygin ◽  
...  

Abstract We discuss specific aspects of space-ground VLBI (SVLBI) data processing for spectral line experiments (H2O and OH masers) in the Radioastron project. To meet the technical requirements of the Radioastron mission, a new software FX correlator (ASCFX) and a unique data archive, which stores raw data from all VLBI stations for all experiments of the project, were developed at the Astro Space Center. All maser observations conducted in the Radioastron project have now been correlated with the ASCFX correlator. Positive detections on the space-ground baselines were found in 38 sessions out of 144 (a detection rate of about 27%). Finally, we present upper limits on the angular size of the most compact spots observed in two galactic H2O masers, W3OH(H2O) and OH043.8-0.1.
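The ASCFX correlator follows the standard FX design: each station's voltage stream is Fourier-transformed first ("F"), then the spectra are cross-multiplied and accumulated ("X"). A minimal sketch of that principle; the `fx_correlate` helper, the 256-channel setup, and the pure-tone test signal are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fx_correlate(x, y, nchan=256):
    """FX-style correlation: FFT segments of each station's voltage
    stream ("F"), then multiply one spectrum by the conjugate of the
    other and accumulate ("X")."""
    nseg = min(len(x), len(y)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for k in range(nseg):
        X = np.fft.fft(x[k * nchan:(k + 1) * nchan])
        Y = np.fft.fft(y[k * nchan:(k + 1) * nchan])
        acc += X * np.conj(Y)
    return acc / nseg

# A maser-like test signal: a pure tone seen by both "stations".
n = np.arange(4096)
tone = np.exp(2j * np.pi * 0.125 * n)
spec = fx_correlate(tone, tone)
print(np.argmax(np.abs(spec)))  # 32, i.e. 0.125 * 256 channels
```

For a spectral line experiment the point is that the accumulated cross-spectrum concentrates the correlated signal in the channels containing the maser line, which is why an FX architecture suits these observations.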

2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110075
Author(s):  
Jean-Christophe Plantin

Archival data processing consists of cleaning and formatting data between the moment a dataset is deposited and its publication on the archive’s website. In this article, I approach data processing by combining scholarship on invisible labor in knowledge infrastructures with a Marxian framework and show the relevance of considering data processing as factory labor. Using this perspective to analyze ethnographic data collected during a six-month participatory observation at a U.S. data archive, I generate a taxonomy of the forms of alienation that data processing generates, but also the types of resistance that processors develop, across four categories: routine, speed, skill, and meaning. This synthetic approach demonstrates, first, that data processing reproduces typical forms of factory worker’s alienation: processors are asked to work along a strict standardized pipeline, at a fast pace, without acquiring substantive skills or having a meaningful involvement in their work. It reveals, second, how data processors resist the alienating nature of this workflow by developing multiple tactics along the same four categories. Seen through this dual lens, data processors are therefore not only invisible workers, but also factory workers who follow and subvert a workflow organized as an assembly line. I conclude by proposing a four-step framework to better value the social contribution of data workers beyond the archive.


1969 ◽  
Vol 6 (01) ◽  
pp. 48-57
Author(s):  
Edward S. Karlson ◽  
John J. Davis

An operational system for providing processed maintenance and repair information for vessels is described. The content includes a description of a detailed coding system for reducing raw data to composite code numbers suitable for automatic data processing. The objectives of the system, and the constraints on it, are discussed. The Marad data system has been operational for four years. The scope of the data processed and its utilization are presented. Seven current studies, concerning both vessels as a whole and specific shipboard equipment, are included.


2015 ◽  
Vol 31 (2) ◽  
pp. 231-247 ◽  
Author(s):  
Matthias Schnetzer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

Abstract This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computational approaches.
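Classification rates of the kind proposed here can be computed directly once a set of known register values is withheld and re-imputed. A minimal sketch; the function name and the simulated employment-status data are illustrative, not from the article:

```python
def classification_rate(true_vals, imputed_vals):
    """Share of imputed categorical values that match the known true
    values -- a simple accuracy measure for one imputation step."""
    assert len(true_vals) == len(imputed_vals)
    hits = sum(t == i for t, i in zip(true_vals, imputed_vals))
    return hits / len(true_vals)

# Simulated evaluation: hide known register values, impute, compare.
truth   = ["employed", "employed", "retired", "student", "employed"]
imputed = ["employed", "retired",  "retired", "student", "employed"]
print(classification_rate(truth, imputed))  # 0.8
```

Computed per imputation step and per variable, such rates make it possible to trace how each step contributes to overall data quality.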


Author(s):  
David Gelernter

We’ve installed the foundation piles and are ready to start building Mirror Worlds. In this chapter we discuss (so to speak) the basement; in the next chapter we get to the attic; and the chapter after that fills in the middle region and glues the whole thing together. The basement we are about to describe is filled with lots of a certain kind of ensemble program. This kind of program, called a Trellis, makes the connection between external data and internal mirror-reality. The Trellis is, accordingly, a key player in the Mirror World cast. It’s also a good example of ensemble programming in general and, I’ll argue, a highly significant gadget in itself. The hulking problem with which the Trellis does battle on the Mirror World’s behalf is a problem that the real world, too, will be confronting directly and in person very soon. Floods of data are pounding down all around us in torrents. How will we cope? What will we do with all this stuff? When the encroaching electronification of the world pushes the downpour rate higher by a thousand or a million times or more, what will we do then? Concretely: I’m talking about realtime data processing. The subject in this chapter is fresh data straight from the sensor. We’d like to analyze this fresh data in “realtime”—to achieve some understanding of data values as they emerge. Raw data pours into a Mirror World and gets refined by a data distillery in the basement. The processed, refined, one-hundred-percent pure stuff gets stored upstairs in the attic, where it ferments slowly into history. (In the next chapter we move upstairs.) Trellis programs are the topic here: how they are put together, how they work. But there’s an initial question that’s too important to ignore. We need to take a brief trip outside into the deluge, to establish what this stuff is and where it’s coming from. Data-gathering instruments are generally electronic.
They are sensors in the field, dedicated to the non-stop, automatic gathering of measurements; or they are full-blown infomachines, waiting for people to sit down, log on, and enter data by hand.


2019 ◽  
Vol 12 (3) ◽  
pp. 1871-1888 ◽  
Author(s):  
Felix Kelberlau ◽  
Jakob Mann

Abstract. Turbulent velocity spectra derived from velocity–azimuth display (VAD) scanning wind lidars deviate from spectra derived from one-point measurements due to averaging effects and cross-contamination among the velocity components. This work presents two novel methods for minimizing these effects through advanced raw data processing. The squeezing method is based on the assumption of frozen turbulence and introduces a time delay into the raw data processing in order to reduce cross-contamination. The two-beam method uses only certain laser beams in the reconstruction of wind vector components to overcome averaging along the measurement circle. Models are developed for conventional VAD scanning and for both new data processing methods to predict the spectra and identify systematic differences between the methods. Numerical modeling and comparison with measurement data were both used to assess the performance of the methods. We found that the squeezing method reduces cross-contamination by eliminating the resonance effect caused by the longitudinal separation of measurement points and also considerably reduces the averaging along the measurement circle. The two-beam method eliminates this averaging effect completely. The combined use of the squeezing and two-beam methods substantially improves the ability of VAD scanning wind lidars to measure in-wind (u) and vertical (w) fluctuations.
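Conventional VAD processing, the baseline that both new methods modify, reconstructs the wind vector by fitting the expected sinusoidal dependence of radial speed on azimuth. A minimal least-squares sketch of that conventional step; the sign and azimuth conventions and the 75° elevation angle are illustrative assumptions:

```python
import numpy as np

def vad_fit(azimuths_deg, radial_speeds, elevation_deg=75.0):
    """Least-squares wind vector (u, v, w) from VAD radial speeds.
    Model: v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el)."""
    az = np.radians(azimuths_deg)
    el = np.radians(elevation_deg)
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.full_like(az, np.sin(el))])
    uvw, *_ = np.linalg.lstsq(A, radial_speeds, rcond=None)
    return uvw

# Synthetic check: radial speeds generated from a known wind vector.
az = np.arange(0.0, 360.0, 5.0)
true_uvw = np.array([8.0, 2.0, -0.5])
el = np.radians(75.0)
vr = (true_uvw[0] * np.sin(np.radians(az)) * np.cos(el)
      + true_uvw[1] * np.cos(np.radians(az)) * np.cos(el)
      + true_uvw[2] * np.sin(el))
print(np.allclose(vad_fit(az, vr), true_uvw))  # True
```

The averaging and cross-contamination effects discussed in the abstract arise because each radial sample in this fit probes a different point on the measurement circle; the squeezing and two-beam methods alter which samples enter the reconstruction.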


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 967 ◽  
Author(s):  
Ting-Li Han ◽  
Yang Yang ◽  
Hua Zhang ◽  
Kai P. Law

Background: A challenge of metabolomics is processing the enormous amount of information generated by sophisticated analytical techniques. The raw data of an untargeted metabolomic experiment contain unwanted biological and technical variations that confound the biological variations of interest. The art of data normalisation, which offsets these variations and/or eliminates experimental or biological biases, has made significant progress recently. However, published comparative studies are often biased or have omissions. Methods: We investigated these issues with our own data set, using five representative methods drawn from internal standard-based, model-based, and pooled quality control-based approaches, and examined the performance of these methods against each other in an epidemiological study of gestational diabetes using plasma. Results: Our results demonstrated that the quality control-based approaches gave the highest data precision of all methods tested and would be the method of choice for controlled experimental conditions. For our epidemiological study, however, the model-based approaches were able to classify the clinical groups more effectively than the quality control-based approaches because of their ability to minimise not only technical variations but also biological biases in the raw data. Conclusions: We suggest that metabolomic researchers should optimise and justify the method they choose for their experimental conditions in order to obtain an optimal biological outcome.
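A pooled quality control-based approach, in its simplest form, rescales each metabolite feature against the intensity observed in the pooled QC injections. This is a minimal sketch of that general idea, not any of the specific methods compared in the paper; the median-scaling scheme and all variable names are illustrative assumptions:

```python
import numpy as np

def qc_median_normalise(data, qc_mask):
    """Scale each feature (column) by the median intensity of the
    pooled QC injections, offsetting technical variation that is
    common to all samples in the run."""
    qc_medians = np.median(data[qc_mask], axis=0)
    return data / qc_medians

# Toy intensity matrix: 10 injections x 4 features, every 3rd is a QC.
rng = np.random.default_rng(1)
data = rng.lognormal(mean=2.0, sigma=0.3, size=(10, 4))
qc_mask = np.zeros(10, dtype=bool)
qc_mask[::3] = True
norm = qc_median_normalise(data, qc_mask)
# After normalisation the QC median is 1 for every feature.
print(np.median(norm[qc_mask], axis=0))
```

Because the QC pool is biologically constant, any systematic shift it shows must be technical, which is why QC-based scaling yields high precision under controlled conditions but cannot remove biological biases.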


1983 ◽  
Vol 104 ◽  
pp. 85-86
Author(s):  
R. A. Laing ◽  
F. N. Owen ◽  
J. J. Puschell

This paper is concerned with the distant radio galaxies in a sample of bright sources selected at 178 MHz by Laing, Riley & Longair (1982). This sample is 96% complete for sources with θ < 10′ and the bias of the 3CR catalogue against sources of large angular size has also been reduced. Deep optical searches have located many candidate identifications, but the probability of a chance coincidence with an unrelated object is appreciable, especially in the faintest cases, unless the area to be searched is small. We have therefore mapped the sources with candidate identifications having V > 20, using the VLA at a wavelength of 6 cm (Laing, Owen & Puschell, in preparation), in order to search for radio cores. We have so far located cores in 16/23 sources and set 5σ upper limits of 0.6 mJy for the remainder. None of the cores had been detected previously. In all cases, the cores coincide with optical objects, although one source (3C 340) had been misidentified. Several ambiguities have now been resolved.


2019 ◽  
Vol 2 (1) ◽  
pp. 61-73
Author(s):  
Pankaj Lathar ◽  
K. G. Srinivasa

With the advancements in science and technology, data is being generated at a staggering rate. The raw data generated is generally of high value and may conceal important information with the potential to solve several real-world problems. To extract this information, the available raw data must be processed and analysed efficiently. It has, however, been observed that such raw data is generated at a rate faster than it can be processed by traditional methods. This has led to the emergence of the popular parallel processing programming model – MapReduce. In this study, the authors perform a comparative analysis of two popular data processing engines – Apache Flink and Hadoop MapReduce. The analysis is based on the parameters of scalability, reliability and efficiency. The results reveal that Flink unambiguously outperforms Hadoop's MapReduce. Flink's edge over MapReduce can be attributed to the following features – Active Memory Management, Dataflow Pipelining and an Inline Optimizer. It can be concluded that as the complexity and magnitude of real-time raw data continuously increases, it is essential to explore newer platforms that are capable of processing such data adequately and efficiently.
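The MapReduce model referenced above splits a job into a map phase that emits key-value pairs, a shuffle that groups them by key, and a reduce phase that aggregates each group. A minimal single-process sketch of that dataflow, shown as a word count; this illustrates the programming model only and is not Hadoop's or Flink's actual API:

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    """Apply the mapper to every input record, emitting (key, value) pairs."""
    return chain.from_iterable(mapper(r) for r in records)

def shuffle(pairs):
    """Group intermediate values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Collapse each key's value list with the reducer."""
    return {key: reducer(key, values) for key, values in groups.items()}

docs = ["big data raw data", "raw data processing"]
counts = reduce_phase(
    shuffle(map_phase(docs, lambda d: [(w, 1) for w in d.split()])),
    lambda key, values: sum(values))
print(counts["data"])  # 3
```

Hadoop materialises the shuffle output to disk between phases, whereas Flink pipelines records between operators in memory, which is one reason for the performance gap the study reports.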


2000 ◽  
Vol 18 (9) ◽  
pp. 1231-1241 ◽  
Author(s):  
J. M. Holt ◽  
P. J. Erickson ◽  
A. M. Gorczyca ◽  
T. Grydeland

Abstract. The Millstone Hill Incoherent Scatter Data Acquisition System (MIDAS) is based on an abstract model of an incoherent scatter radar. This model is implemented in a hierarchical software system, which serves to isolate hardware and low-level software implementation details from higher levels of the system. Inherent in this is the idea that implementation details can easily be changed in response to technological advances. MIDAS is an evolutionary system, and the MIDAS hardware has, in fact, evolved while the basic software model has remained unchanged. From the earliest days of MIDAS, it was realized that some functions implemented in specialized hardware might eventually be implemented by software in a general-purpose computer. MIDAS-W is the realization of this concept. The core component of MIDAS-W is a Sun Microsystems UltraSparc 10 workstation equipped with an Ultrarad 1280 PCI bus analog to digital (A/D) converter board. In the current implementation, a 2.25 MHz intermediate frequency (IF) is bandpass sampled at 1 µs intervals and these samples are multicast over a high-speed Ethernet which serves as a raw data bus. A second workstation receives the samples, converts them to filtered, decimated, complex baseband samples and computes the lag-profile matrix of the decimated samples. Overall performance is approximately ten times better than the previous MIDAS system, which utilizes a custom digital filtering module and array processor based correlator. A major advantage of MIDAS-W is its flexibility. A portable, single-workstation data acquisition system can be implemented by moving the software receiver and correlator programs to the workstation with the A/D converter. When the data samples are multicast, additional data processing systems, for example for raw data recording, can be implemented simply by adding another workstation with suitable software to the high-speed network. 
Testing of new data processing software is also greatly simplified, because a workstation with the new software can be added to the network without impacting the production system. MIDAS-W has been operated in parallel with the existing MIDAS-1 system to verify that incoherent scatter measurements by the two systems agree. MIDAS-W has also been used in a high-bandwidth mode to collect data on the November 1999 Leonid meteor shower.
Key words: Electromagnetics (instruments and techniques; signal processing and adaptive antennas) – Ionosphere (instruments and techniques)
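The software receiver pipeline described above, bandpass-sampled IF converted to complex baseband and then accumulated into a lag-profile matrix, can be sketched in a few lines. This is an illustrative toy, not MIDAS-W code; the decimation factor, gate and lag counts, and the omission of the low-pass filter before decimation are simplifying assumptions:

```python
import numpy as np

def to_baseband(samples, f_if, fs, decim):
    """Mix real IF samples down to complex baseband and decimate.
    (A production receiver would low-pass filter before decimating.)"""
    n = np.arange(len(samples))
    bb = samples * np.exp(-2j * np.pi * (f_if / fs) * n)
    return bb[::decim]

def lag_profile_matrix(bb, ngates, nlags):
    """Unnormalised averaged lag products <s[t] s*[t+lag]> per range gate."""
    lpm = np.zeros((ngates, nlags), dtype=complex)
    for g in range(ngates):
        seg = bb[g:g + 2 * nlags]  # baseband samples contributing to gate g
        for lag in range(nlags):
            lpm[g, lag] = np.mean(seg[:len(seg) - lag] * np.conj(seg[lag:]))
    return lpm

# A 2.25 MHz IF bandpass-sampled at 1 MHz (1 us intervals), as in MIDAS-W.
fs, f_if = 1.0e6, 2.25e6
t = np.arange(256)
iq = to_baseband(np.cos(2 * np.pi * (f_if / fs) * t), f_if, fs, decim=4)
lpm = lag_profile_matrix(iq, ngates=8, nlags=4)
print(lpm.shape)  # (8, 4)
```

Because every stage after the A/D converter is ordinary software on commodity workstations, swapping in a different filter, decimation rate, or lag-profile scheme is an edit to code rather than to hardware, which is the flexibility the abstract emphasises.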

