Source Detection with Interferometric Datasets

2011 ◽  
Vol 7 (S285) ◽  
pp. 414-416
Author(s):  
Cathryn M. Trott ◽  
Randall B. Wayth ◽  
Jean-Pierre R. Macquart ◽  
Steven J. Tingay

Abstract: The detection of sources in interferometric radio data typically relies on extracting information from images, formed by Fourier transform of the underlying visibility dataset and CLEANed of contaminating sidelobes through iterative deconvolution. Variable and transient radio sources span a large range of variability timescales, and their study has the potential to enhance our knowledge of the dynamic universe. Their detection and classification involve large data rates, non-stationary PSFs, commensal observing programs, and ambitious science goals, and will demand a paradigm shift in the deployment of next-generation instruments. Optimal source detection and classification in real time require efficient and automated algorithms. On short time-scales, variability can be probed with an optimal matched filter detector applied directly to the visibility dataset. This paper presents the design of such a detector and some preliminary detection-performance results.
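The detector operates on the visibility data directly rather than on images. As a rough illustration (not the authors' design), a matched filter for a point source with a known model visibility can be sketched as follows; the array sizes, baseline coordinates, source position, and 5-sigma threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative matched filter applied directly to visibilities
# (hypothetical point-source model; not the paper's exact detector).
n_vis = 512
u = rng.uniform(-100, 100, n_vis)                # baseline coords (wavelengths)
v = rng.uniform(-100, 100, n_vis)
l0, m0 = 0.01, -0.02                             # assumed source direction cosines
model = np.exp(-2j * np.pi * (u * l0 + v * m0))  # unit point-source visibility

sigma = 1.0                                      # per-component noise std
flux = 0.5
vis = flux * model + rng.normal(0, sigma, n_vis) + 1j * rng.normal(0, sigma, n_vis)

# Matched-filter statistic: correlate data with the model, normalise by noise,
# so the statistic is ~N(0, 1) under the noise-only hypothesis.
stat = np.real(np.vdot(model, vis)) / (sigma * np.sqrt(n_vis))
detected = stat > 5.0                            # 5-sigma style threshold
```

Because the filter is matched to the source model, the expected statistic grows as flux·sqrt(n_vis)/sigma, so even a sub-noise-level source becomes detectable with enough visibilities.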

2014 ◽  
Vol 08 (02) ◽  
pp. 209-227 ◽  
Author(s):  
Håkon Kvale Stensland ◽  
Vamsidhar Reddy Gaddam ◽  
Marius Tennøe ◽  
Espen Helgedagsrud ◽  
Mikkel Næss ◽  
...  

There are many scenarios where high-resolution, wide-field-of-view video is useful. Such panorama video may be generated using camera arrays, where the feeds from multiple cameras pointing at different parts of the captured area are stitched together. However, processing the different steps of a panorama video pipeline in real time is challenging due to the high data rates and the stringent timeliness requirements. In our research, we use panorama video in a sport analysis system called Bagadus. This system is deployed at Alfheim stadium in Tromsø, and due to live usage, the video events must be generated in real time. In this paper, we describe our real-time panorama system built using a low-cost CCD HD video camera array. We describe how we have implemented the different components and evaluated alternatives. The performance results from experiments run on commodity hardware, with and without co-processors such as graphics processing units (GPUs), show that the entire pipeline is able to run in real time.
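One core pipeline step is blending the overlap between two adjacent camera feeds. A minimal sketch of that step, using linear feathering across the overlap; the frame sizes, overlap width, and constant-valued stand-in frames are illustrative assumptions, not the Bagadus implementation:

```python
import numpy as np

# Two horizontally overlapping frames blended with linear feathering.
H, W, OVERLAP = 4, 8, 4

left = np.full((H, W), 100.0)    # stand-in for a left camera frame
right = np.full((H, W), 200.0)   # stand-in for a right camera frame

pano = np.zeros((H, 2 * W - OVERLAP))
pano[:, :W - OVERLAP] = left[:, :W - OVERLAP]    # left-only region
pano[:, W:] = right[:, OVERLAP:]                 # right-only region

# Feather the overlap: the weight slides linearly from left frame to right.
w = np.linspace(0.0, 1.0, OVERLAP)
pano[:, W - OVERLAP:W] = (1 - w) * left[:, W - OVERLAP:] + w * right[:, :OVERLAP]
```

Feathering avoids a visible seam at the stitch boundary; in a real pipeline this runs per frame, which is why the paper offloads such steps to GPUs.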


PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e6142
Author(s):  
Therese A. Catanach ◽  
Andrew D. Sweet ◽  
Nam-phuong D. Nguyen ◽  
Rhiannon M. Peery ◽  
Andrew H. Debevec ◽  
...  

Aligning sequences for phylogenetic analysis (multiple sequence alignment; MSA) is an important but increasingly computationally expensive step with the recent surge in DNA sequence data. Much of this sequence data is publicly available, but can be extremely fragmentary (i.e., a combination of full genomes and genomic fragments), which can compound the computational issues related to MSA. Traditionally, alignments are produced with automated algorithms and then checked and/or corrected “by eye” prior to phylogenetic inference. However, this manual curation is inefficient at the data scales required of modern phylogenetics and results in alignments that are not reproducible. Recently, methods have been developed for fully automating alignments of large data sets, but it is unclear if these methods produce alignments that result in compatible phylogenies when compared to more traditional alignment approaches that combine automated and manual methods. Here we use approximately 33,000 publicly available sequences from the hepatitis B virus (HBV), a globally distributed and rapidly evolving virus, to compare different alignment approaches. Using one data set composed exclusively of whole genomes and a second that also included sequence fragments, we compared three MSA methods: (1) a purely automated approach using traditional software, (2) an automated approach followed by manual “by eye” editing, and (3) more recent fully automated approaches. To understand how these methods affect phylogenetic results, we compared the resulting tree topologies based on these different alignment methods using multiple metrics. We further determined whether the monophyly of existing HBV genotypes was supported in phylogenies estimated from each alignment type and under different statistical support thresholds. Traditional and fully automated alignments produced similar HBV phylogenies.
Although there was variability across branch-support thresholds, allowing lower support thresholds tended to result in more differences among trees. Therefore, differences between the trees could be best explained by phylogenetic uncertainty unrelated to the MSA method used. Nevertheless, automated alignment approaches did not require human intervention and were therefore considerably less time-intensive than traditional approaches. Because of this, we conclude that fully automated algorithms for MSA are fully compatible with older methods even in extremely difficult-to-align data sets. Additionally, we found that most HBV diagnostic genotypes did not correspond to evolutionarily sound groups, regardless of alignment type and support threshold. This suggests there may be errors in genotype classification in the database or that HBV genotypes may need revision.
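One standard metric for comparing tree topologies (the abstract does not name its exact metrics) is the Robinson-Foulds distance: the number of bipartitions present in one tree but not the other. A minimal version over precomputed clade sets, using toy taxa rather than the HBV data:

```python
def rf_distance(clades_a, clades_b):
    """Symmetric difference of the two trees' non-trivial bipartitions."""
    a = {frozenset(c) for c in clades_a}
    b = {frozenset(c) for c in clades_b}
    return len(a ^ b)

# Two small trees that disagree on one internal branch: tree1 groups
# {A, B, C} together, while tree2 groups {B, C} instead.
tree1 = [{"A", "B"}, {"A", "B", "C"}]
tree2 = [{"A", "B"}, {"B", "C"}]
print(rf_distance(tree1, tree2))  # → 2
```

Identical topologies give distance 0, so averaging this distance over tree pairs is one way to quantify how much the alignment method changed the inferred phylogeny.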


2013 ◽  
Vol 278-280 ◽  
pp. 1767-1770 ◽  
Author(s):  
Guo You Chen ◽  
Jia Jia Miao ◽  
Feng Xie ◽  
Han Dong Mao

Cloud computing has been a hot research area in computer network technology since it was proposed in 2007. Cloud computing has also been envisioned as the next-generation architecture of the IT enterprise [1]. Cloud computing infrastructures enable companies to cut costs by outsourcing computations on demand. It moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy [2]. This poses many new security challenges. In this paper, we focus on data storage security in the cloud, which has been the most important aspect of quality of service. To ensure the confidentiality and integrity of users' data in the cloud, and to support dynamic data operations such as modification, insertion, and deletion, we propose a framework for storage security that includes a cryptographic storage scheme and a data structure. With our framework, the untrusted server cannot learn anything about the plaintext, and dynamic operations can be finished in a short time. The encryption algorithm and data storage structure are simple and easy to maintain. Hence, our framework is practical to use today.
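As a loose sketch of this kind of scheme (not the paper's actual construction), per-block encrypt-then-MAC storage gives confidentiality plus integrity while keeping dynamic operations local to one block. The hash-based XOR keystream below is a toy stand-in for a real cipher and must not be used for actual security:

```python
import hashlib, hmac, os

KEY = os.urandom(32)  # illustrative single key for both encryption and MAC

def _keystream(key, nonce, n):
    """Toy keystream from iterated SHA-256 (stand-in for a real cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(block: bytes):
    """Encrypt a block, then MAC nonce+ciphertext for integrity."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(block, _keystream(KEY, nonce, len(block))))
    tag = hmac.new(KEY, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def open_(nonce, ct, tag):
    """Verify the MAC, then decrypt; raises if the server tampered."""
    if not hmac.compare_digest(tag, hmac.new(KEY, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(KEY, nonce, len(ct))))

# Dynamic operations act on a list of sealed blocks; modification,
# insertion, and deletion each touch only the affected block.
store = [seal(b"block-0"), seal(b"block-1")]
store.insert(1, seal(b"inserted"))     # insertion
store[0] = seal(b"block-0 v2")         # modification
del store[2]                           # deletion
```

Because each block carries its own nonce and tag, the server sees only ciphertext, and an update re-seals one block instead of re-encrypting the whole file.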


2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Diego Herranz ◽  
Francisco Argüeso ◽  
Pedro Carvalho

We describe the state-of-the-art status of multifrequency detection techniques for compact sources in microwave astronomy. From the simplest cases, where the spectral behaviour is well known (e.g., thermal SZ clusters), to the more complex cases, where there is little a priori information (e.g., polarized radio sources), we review the main advances and the most recent results in the detection problem.
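In the known-spectrum case, the essence of a multifrequency matched filter is to combine channels with weights proportional to the spectral profile divided by the noise variance. A single-pixel, diagonal-noise sketch with made-up numbers (three channels, not any real experiment's bands):

```python
import numpy as np

rng = np.random.default_rng(1)

f = np.array([1.0, 0.6, -0.4])        # assumed spectral behaviour per channel
var = np.array([1.0, 0.5, 2.0])       # per-channel noise variance

# Inverse-variance, spectrum-matched weights, normalised so the estimator
# is unbiased for the source amplitude (w · f == 1).
w = (f / var) / np.sum(f**2 / var)

amp_true = 3.0
n_trials = 20000
data = amp_true * f + rng.normal(0, np.sqrt(var), (n_trials, 3))
amp_est = data @ w                    # amplitude estimate per trial
```

Channels where the source is bright and the noise low get the largest weights; a channel where the source is negative (as for thermal SZ below ~217 GHz) enters with a negative weight rather than being discarded.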


2013 ◽  
Vol 411-414 ◽  
pp. 1581-1587
Author(s):  
Gai Fang Wang ◽  
Feng Feng Fan ◽  
Xi Tao Xing ◽  
Yong Wang

With the rapid development of sensor technology in recent years, sensors have been applied to various fields for detecting object states, e.g. intelligent agriculture, intelligent power, intelligent cities, the Internet of Things, etc., and have become more and more critical for dynamic data acquisition. Due to the detection environment, detection technology, costs, and other factors, access to actual sensors for developing or debugging a sensor application may incur additional cost and time. Meanwhile, testing new sensor applications and protocols needs appropriate, feasible approaches with low cost and short turnaround. Therefore, designing and developing a simulation environment for sensors and sensor applications is fairly urgent. This paper analyses the general structure of digital sensors and then designs a domain-based high-level architecture for a digital sensor simulator. Finally, a prototype of the digital sensor simulator was developed and demonstrated proper performance. Results show that the digital sensor simulator provides an effective way to test novel sensors and protocols and can also play an important role in constituting a sensor network simulation environment.
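A minimal sketch of what such a simulator might model: a true signal plus noise, passed through an ADC-style quantisation stage mirroring the general structure of a digital sensor. All parameters here are illustrative assumptions:

```python
import random

random.seed(7)

class SimulatedSensor:
    """Toy digital sensor: true value + Gaussian noise, then quantisation."""

    def __init__(self, true_value, noise_std, resolution_bits=10, full_scale=100.0):
        self.true_value = true_value
        self.noise_std = noise_std
        self.levels = 2 ** resolution_bits
        self.full_scale = full_scale

    def read(self):
        analog = self.true_value + random.gauss(0.0, self.noise_std)
        analog = min(max(analog, 0.0), self.full_scale)      # clamp to range
        code = round(analog / self.full_scale * (self.levels - 1))
        return code * self.full_scale / (self.levels - 1)    # back to units

sensor = SimulatedSensor(true_value=25.0, noise_std=0.2)
readings = [sensor.read() for _ in range(1000)]
```

An application under test can consume `read()` exactly as it would a driver for a physical sensor, which is the substitution the simulator enables.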


2008 ◽  
Vol 65 (3) ◽  
pp. 371-378 ◽  
Author(s):  
Ted T. Packard ◽  
May Gómez

Abstract: Packard, T. T., and Gómez, M. 2008. Exploring a first-principles-based model for zooplankton respiration. – ICES Journal of Marine Science, 65: 371–378. Oxygen consumption (R) is caused by the respiratory electron transfer system (ETS), not biomass. ETS is ubiquitous in zooplankton, determines the level of potential respiration (Φ), and is the enzyme system that ultimately oxidizes the products of food digestion, makes ATP, and consumes O2. Current respiration hypotheses are based on allometric relationships between R and biomass. The most accepted version at constant temperature (T) is R = i0 M^0.75, where i0 is a constant. We argue that, for zooplankton, a Φ-based, O2-consuming algorithm is more consistent with the cause of respiration. Our point: although biomass is related to respiration, the first-principles cause of respiration is ETS, because it controls O2 consumption. Biomass itself is indirectly related to respiration, because it packages the ETS. Consequently, we propose bypassing the packaging and modelling respiration from ETS and hence Φ. This Φ is regulated by T, according to Arrhenius theory, and by specific reactants (S) that sustain the redox reactions of O2 consumption, according to Michaelis–Menten kinetics. Our model not only describes respiration over a large range of body sizes but also explains and accurately predicts respiration on short time-scales. At constant temperature, our model takes the form R = Φ exp(-Ea/(Rg T)) · S/(Km + S), where Ea is the Arrhenius activation energy, Rg the gas constant, and Km the Michaelis–Menten constant.
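The abstract states that Φ is regulated by temperature according to Arrhenius theory and by substrate according to Michaelis-Menten kinetics. A numerical sketch combining those two factors, with illustrative parameter values rather than the paper's fitted constants:

```python
import math

def respiration(phi, temp_k, substrate, ea=50_000.0, rg=8.314, km=5.0):
    """Toy respiration model: Arrhenius temperature factor times
    Michaelis-Menten substrate saturation, scaled by potential
    respiration phi. Parameter values are illustrative only."""
    arrhenius = math.exp(-ea / (rg * temp_k))          # temperature control
    michaelis_menten = substrate / (km + substrate)    # substrate control
    return phi * arrhenius * michaelis_menten

# Warming raises R through the Arrhenius factor; the Michaelis-Menten
# factor saturates toward 1 at high substrate and equals 0.5 at S = Km.
r_cold = respiration(phi=1.0e9, temp_k=283.0, substrate=20.0)
r_warm = respiration(phi=1.0e9, temp_k=293.0, substrate=20.0)
```

The two factors reproduce the qualitative behaviour the abstract describes: respiration rises with temperature and is capped by substrate availability rather than by biomass.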


2018 ◽  
Vol 9 (34) ◽  
pp. 97-104
Author(s):  
Ufuk ÇELİK ◽  
Eyüp AKÇETİN

Process mining is a new area in the science of data mining and a subset of business intelligence. Process mining analysis provides an idea of a general process by comparing each process with others in terms of time and the people responsible for the process. For this reason, event logs are examined. Event logs contain large amounts of data, because they keep all the records that occur during short time intervals. Special programs are needed to examine such data. These programs generate a process map using information such as event ID, activity, time, and responsible person. Through this analysis, processes are discovered, monitored, and improved. In this study, the process mining tools ProM, Disco, Celonis, and My-Invenio were examined and their performance and usage features compared. Based on the obtained results, the usability, performance, and reporting features of the software used in process analysis are revealed.
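The process map such tools generate is, at its core, a directly-follows graph built from (case ID, activity, timestamp) records. A minimal sketch over a made-up event log:

```python
from collections import Counter, defaultdict

# Toy event log: (case ID, activity, timestamp) records, unordered on arrival.
log = [
    ("case1", "register", 1), ("case1", "review", 2), ("case1", "approve", 3),
    ("case2", "register", 1), ("case2", "review", 2), ("case2", "reject", 4),
    ("case3", "register", 2), ("case3", "review", 5), ("case3", "approve", 6),
]

# Group events into per-case traces, ordered by timestamp within each case.
traces = defaultdict(list)
for case, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
    traces[case].append(activity)

# Count directly-follows pairs: the edges of the process map.
edges = Counter()
for activities in traces.values():
    for a, b in zip(activities, activities[1:]):
        edges[(a, b)] += 1

print(edges[("register", "review")])  # → 3
```

Edge counts (here register→review occurs in all three cases, review→approve in two) are what the tools render as weighted arrows in the discovered process map.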


2020 ◽  
Vol 7 (5) ◽  
pp. 923
Author(s):  
Eko Arifianto ◽  
Aghus Sofwan ◽  
Teguh Prakoso

<p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstrak">Machine-to-machine (M2M) communication in capillary networks uses the Packet Reservation Multiple Access (PRMA) transmission method, an ordinary data frame structure, and an event-driven communication scenario. As devices are added, this method, frame structure, and communication scenario cannot handle the very high data rates, resulting in congestion that makes communication inefficient. This research aims to keep M2M communication efficient even as the number of devices grows, by creating a new frame structure and a new communication scenario: the Adaptive Poly Frame (APF) and the Scheduler Update (SU). 
APF and SU are designed by assigning sequence numbers and priorities to the data, which are then optimized by increasing the MK contention probability (O), the number of slot occupancy cycles (B), the number of channel occupancy cycles (S), and the PRMA Successful Transmission rate (TS<sub>PRMA</sub>). This research achieved successful transmission of 92-28%, optimized successful transmission of 93-30%, transmission cycles of 1.5-8.1%, and transmission cycle reduction of 0.9-7.2%.</p>
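A loose sketch of the ordering idea, serving slots by (priority, sequence number) so that event-driven frames are not delayed behind routine traffic. This illustrates the scheduling rule only, not the paper's APF/SU design; device names and priority values are made up:

```python
import heapq

class SlotScheduler:
    """Serves transmission slots in (priority, sequence number) order."""

    def __init__(self):
        self._heap = []
        self._seq = 0

    def enqueue(self, device, priority):
        # Lower priority value = more urgent; the sequence number breaks
        # ties so equal-priority frames keep their arrival order.
        heapq.heappush(self._heap, (priority, self._seq, device))
        self._seq += 1

    def next_slot(self):
        priority, seq, device = heapq.heappop(self._heap)
        return device

sched = SlotScheduler()
sched.enqueue("meter-A", priority=2)
sched.enqueue("alarm-B", priority=0)   # event-driven alarm jumps the queue
sched.enqueue("meter-C", priority=2)
order = [sched.next_slot() for _ in range(3)]
print(order)  # → ['alarm-B', 'meter-A', 'meter-C']
```

The sequence number doubles as a fairness guarantee: among frames of the same priority, no later arrival can be served first.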


2019 ◽  
Vol 148 ◽  
pp. 162-174 ◽  
Author(s):  
Lin Ma ◽  
T. Aaron Gulliver ◽  
Anbang Zhao ◽  
Chunsha Ge ◽  
Xuejie Bi
