raw data
Recently Published Documents


TOTAL DOCUMENTS: 1964 (FIVE YEARS: 690)
H-INDEX: 44 (FIVE YEARS: 9)

2022 ◽  
Author(s):  
Zhaohua Li ◽  
Le Wang ◽  
Guangyao Chen ◽  
Muhammad Shafiq ◽  
Zhaoquan Gu

Federated learning has been regarded in recent years as a promising approach to preserving data privacy while fully utilizing data from different owners. However, in the image domain, gradient inversion techniques can reconstruct input images at the pixel level from leaked gradients alone, without access to the raw data, which leaves federated learning vulnerable to such attacks. In this paper, we review the latest advances in image gradient inversion techniques and evaluate their impact on federated learning from the attack perspective. We use eight models and four datasets to evaluate current gradient inversion techniques, comparing attack performance as well as time consumption. Furthermore, we shed light on some important and interesting directions for gradient inversion against federated learning.
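
The pixel-level reconstruction described above can be illustrated with a minimal sketch in the style of the classic "deep leakage from gradients" attack, assuming a PyTorch setup; the model, image shape, and optimizer settings here are illustrative, not the paper's benchmark configuration:

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, leaked_grads, img_shape, n_classes, steps=300):
    """Reconstruct an input image by matching its gradients to leaked ones."""
    dummy_x = torch.randn(1, *img_shape, requires_grad=True)   # candidate image
    dummy_y = torch.randn(1, n_classes, requires_grad=True)    # soft label guess
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), F.softmax(dummy_y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # L2 distance between the dummy-data gradients and the leaked gradients
        diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, leaked_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach()   # pixel-level reconstruction of the input
```

Attacks of this family differ mainly in the gradient-distance metric, the image prior, and the optimizer, which is what drives the differences in attack performance and time consumption that the evaluation measures.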


Author(s):  
Manuel Rodrigues ◽  
Gilles Metris ◽  
Judicael Bedouet ◽  
Joel Bergé ◽  
Patrice Carle ◽  
...  

Abstract Testing the Weak Equivalence Principle (WEP) to a precision of 10⁻¹⁵ requires a quantity of data that gives enough confidence in the final result: ideally, the longer the measurement, the better the rejection of statistical noise. Science sessions lasted at most 120 orbits and were regularly repeated and spaced out to accommodate operational constraints, but also to repeat the experiment under different conditions and to allow time to calibrate the instrument. Several science sessions were performed over the 2.5-year duration of the experiment. This paper describes how the data were produced on the basis of a mission scenario and a data-flow process, driven by a trade-off between the science objectives and the operational constraints. The mission was led by the Centre National d’Etudes Spatiales (CNES), which provided the satellite, the launch, and the ground operations. The ground segment was distributed between CNES and the Office National d’Etudes et de Recherches Aérospatiales (ONERA). CNES provided the raw data through the Centre d’Expertise de Compensation de Traînée (CECT: drag-free expertise centre). The science was led by the Observatoire de la Côte d’Azur (OCA), and ONERA was in charge of the data processing. The latter also provided the instrument and the Science Mission Centre of MICROSCOPE (CMSM).


2022 ◽  
Author(s):  
Jonathan Schonfeld

Abstract Using publicly available video of a diffusion cloud chamber with a very small radioactive source, I measure the spatial distribution of where tracks start, and consider possible implications. This is directly relevant to the quantum measurement problem and its possible resolution, and appears never to have been done before. The raw data are relatively uncontrolled, leading to caveats that should guide future, more tailored experiments. Results may suggest a modification to Born's rule at very small wavefunction, with possibly profound implications for the detection of extremely rare events such as proton decay. I introduce two candidate small-wavefunction Born rule modifications, a hard cutoff and an offset model; the data may favor the offset model, which has a stronger underlying physical rationale. Track distributions from decays in cloud chambers represent a previously unappreciated way to probe the foundations of quantum mechanics, and a novel case of wavefunctions with macroscopic signatures.
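
The two candidate modifications can be made concrete with a schematic sketch; the functional forms below are assumptions chosen to illustrate a hard cutoff and an additive offset at small |ψ|², not the exact definitions used in the paper:

```python
import numpy as np

def born(p):
    """Standard Born rule: detection probability proportional to |psi|^2."""
    return p

def hard_cutoff(p, eps=1e-6):
    """Hard-cutoff model: no detections at all below a threshold."""
    return np.where(p >= eps, p, 0.0)

def offset(p, s=1e-6):
    """Offset model: probability reduced by a constant, floored at zero."""
    return np.maximum(p - s, 0.0)

for p in np.logspace(-9, -3, 7):   # a range of very small |psi|^2 values
    print(f"|psi|^2={p:.1e}  Born={born(p):.1e}  "
          f"cutoff={hard_cutoff(p):.1e}  offset={offset(p):.1e}")
```

Both models suppress detections where the wavefunction is very small, but they differ in how the detection rate recovers as |ψ|² grows; that difference is what a measured track-start distribution could in principle discriminate.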


2022 ◽  
Vol 6 (1) ◽  
Author(s):  
Marco Rossi ◽  
Sofia Vallecorsa

Abstract In this work, we investigate different machine learning-based strategies for denoising raw simulation data from the ProtoDUNE experiment. The ProtoDUNE detector is hosted by CERN and aims to test and calibrate the technologies for DUNE, a forthcoming experiment in neutrino physics. The reconstruction chain consists of converting digital detector signals into physical high-level quantities. We address the first step in reconstruction, namely raw data denoising, leveraging deep learning algorithms. We design two architectures based on graph neural networks, aiming to enhance the receptive field of basic convolutional neural networks. We benchmark this approach against traditional algorithms implemented by the DUNE collaboration. We test the capabilities of graph neural network hardware accelerator setups to speed up training and inference processes.
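
The core idea, widening a convolutional network's receptive field with graph-based aggregation, can be sketched as follows; the layer sizes, the k-nearest-neighbour graph built in feature space, and the mean aggregation are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class GraphConvDenoiser(nn.Module):
    """CNN features refined by aggregating k nearest neighbours in feature space."""
    def __init__(self, channels=16, k=8):
        super().__init__()
        self.k = k
        self.embed = nn.Conv2d(1, channels, 3, padding=1)  # local CNN features
        self.update = nn.Linear(2 * channels, channels)    # node + neighbour mean
        self.out = nn.Conv2d(channels, 1, 1)               # back to one channel

    def forward(self, x):                      # x: (B, 1, H, W) raw ADC image
        f = self.embed(x)
        B, C, H, W = f.shape
        nodes = f.flatten(2).transpose(1, 2)   # (B, H*W, C) pixel-node features
        d = torch.cdist(nodes, nodes)          # dense distances; fine for crops
        idx = d.topk(self.k + 1, largest=False).indices[:, :, 1:]  # drop self
        b = torch.arange(B, device=x.device)[:, None, None]
        neigh = nodes[b, idx].mean(dim=2)      # mean over each node's neighbours
        nodes = torch.relu(self.update(torch.cat([nodes, neigh], dim=-1)))
        f = nodes.transpose(1, 2).reshape(B, C, H, W)
        return self.out(f)                     # denoised image
```

Because neighbours are chosen in feature space rather than on the pixel grid, each node can draw information from distant parts of the image, which is what enlarges the receptive field relative to a plain CNN.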


Author(s):  
Fuminari Tatsugami ◽  
Toru Higaki ◽  
Yuko Nakamura ◽  
Yukiko Honda ◽  
Kazuo Awai

Abstract In dual-energy CT, the object is scanned at two different energies, which makes it possible to identify characteristics of materials that cannot be evaluated on conventional single-energy CT images. This imaging method can be used to perform material decomposition based on differences in the material attenuation coefficients at different energies. Dual-energy analyses can be classified as image-data-based and raw-data-based; the beam-hardening effect is lower with raw-data-based analysis, resulting in more accurate dual-energy analysis. On virtual monochromatic images, the iodine contrast increases as the energy level decreases, which improves the visualization of contrast-enhanced lesions. The application of material decomposition, such as iodine and edema images, also increases the detectability of lesions due to diseases encountered in daily clinical practice. In this review, the minimal essentials of dual-energy CT scanning are presented and its usefulness in daily clinical practice is discussed.
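
The decomposition step reduces to a small worked example: per voxel, the measured attenuation at the two energies is modelled as a linear combination of two basis materials, and the resulting 2x2 system is solved. The coefficients below are rough illustrative values, not calibrated ones:

```python
import numpy as np

# Assumed mass attenuation coefficients [cm^2/g]; rows are the two energies,
# columns are the basis materials (water, iodine).
MU = np.array([[0.227, 12.0],    # low energy
               [0.193,  5.0]])   # high energy

def decompose(mu_low, mu_high):
    """Solve MU @ [rho_water, rho_iodine] = [mu_low, mu_high] for one voxel."""
    return np.linalg.solve(MU, np.array([mu_low, mu_high]))

rho_water, rho_iodine = decompose(0.30, 0.21)
# A virtual monochromatic image at another energy is then a re-mix of the two
# basis densities using that energy's attenuation coefficients.
```

This is also why iodine contrast rises on low-energy virtual monochromatic images: the iodine coefficient grows much faster than water's as the energy drops toward iodine's K-edge.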


Author(s):  
Jorge Hirsch

In arXiv:2111.15017v1 [1], Dias and Salamat posted some of the measured data for the ac magnetic susceptibility of carbonaceous sulfur hydride, a material reported in Nature 586, 373 (2020) [2] to be a room-temperature superconductor. They provided additional measured data in arXiv:2111.15017v2 [3]. Here I provide an analysis of these data. The results indicate that the claim of ref. [2], that magnetic susceptibility measurements support the conclusion that the material is a room-temperature superconductor, is not supported by valid underlying data.


2022 ◽  
pp. 63-81
Author(s):  
Chau H. P. Nguyen ◽  
Howard J. Curzer

This chapter aims to extend the current body of knowledge about phenomenological research methodologies. By focusing exclusively on the Husserlian-oriented descriptive phenomenological methodology, (1) the authors will first provide a brief introduction to Husserl's phenomenology. (2) They will then give a thorough delineation of Giorgi's descriptive phenomenological psychological methodology, which is underpinned by Husserl's phenomenological philosophy. They will subsequently describe in detail methods of data gathering and the method of data analysis of this phenomenological methodology. (3) Finally, they will borrow raw data from published empirical research to demonstrate the application of this data analysis method.


2022 ◽  
Vol 17 (01) ◽  
pp. C01039
Author(s):  
S. Miryala ◽  
S. Mittal ◽  
Y. Ren ◽  
G. Carini ◽  
G. Deptuch ◽  
...  

Abstract In a multi-channel radiation detector readout system, waveform sampling, digitization, and raw data transmission to the data acquisition system constitute a conventional processing chain. The energy deposited on the sensor is estimated by extracting peak amplitudes and areas under pulse envelopes from the raw data, along with the starting times of signals, or times of arrival. However, such quantities can also be estimated using machine learning algorithms on the front-end Application-Specific Integrated Circuits (ASICs), an approach often termed “edge computing”. Edge computation offers enormous benefits, especially when the analytical forms are not fully known or the registered waveform suffers from noise and the imperfections of practical implementations. In this work, we aim to predict the peak amplitude from a single waveform snippet whose rising and falling edges contain only 3 to 4 samples. We thoroughly studied two well-accepted neural network algorithms, the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN), by varying their model sizes. To better fit front-end electronics, neural network model reduction techniques, such as network pruning and variable-bit quantization, were also studied. By combining pruning and quantization, our best-performing model has a size of 1.5 KB, reduced from the 16.6 KB of its full-model counterpart. It reaches a mean absolute error of 0.034, compared with 0.135 for a naive baseline. Such parameter-efficient and predictive neural network models establish the feasibility and practicality of deployment on front-end ASICs.
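
A minimal sketch of the model-reduction side, assuming a PyTorch workflow: a small MLP that maps a waveform snippet to a peak amplitude, magnitude pruning of its weights, and a back-of-the-envelope size estimate under reduced-bit quantization. The snippet length, layer widths, sparsity, and bit width are illustrative, not the paper's configurations:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

N_SAMPLES = 8                       # a short snippet: a few samples per edge
model = nn.Sequential(
    nn.Linear(N_SAMPLES, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),               # predicted peak amplitude
)

# Magnitude pruning: zero out the 70% smallest-magnitude weights per layer
for layer in model:
    if isinstance(layer, nn.Linear):
        prune.l1_unstructured(layer, name="weight", amount=0.7)

# Rough on-chip size if the surviving weights are stored at 4 bits each
nonzero = sum((m.weight != 0).sum().item()
              for m in model if isinstance(m, nn.Linear))
print(f"{nonzero} nonzero weights, ~{nonzero * 4 / 8 / 1024:.2f} KB at 4 bits")
```

Training against known amplitudes with a mean-absolute-error loss and fine-tuning after pruning would complete the pipeline; per the abstract, it is the combination of pruning and quantization that takes the model from 16.6 KB down to 1.5 KB.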

