The Online Reconstruction Software at the E1039/SpinQuest Experiment

2020 ◽  
Author(s):  
Catherine Ayuso
2015 ◽  
Vol 75 (2) ◽  
Author(s):  
Ho Wei Yong ◽  
Abdullah Bade ◽  
Rajesh Kumar Muniandy

Over the past thirty years, a number of researchers have investigated 3D organ reconstruction from medical images, and a few 3D reconstruction software packages are available on the market. However, few studies have focused on 3D reconstruction of breast cancer tumours. Due to the complexity of the method, most 3D reconstructions of breast cancer tumours have been based on MRI slice data, even though mammography is the current clinical practice for breast cancer screening. Therefore, this research investigates the process of creating a method that can effectively reconstruct 3D breast cancer tumours from mammograms. Several steps are proposed for this research, including data acquisition, volume reconstruction, and volume rendering. The expected output of this research is a 3D breast cancer tumour model generated from correctly registered mammograms. The main purpose of this research is to devise a 3D reconstruction method that can produce a good breast cancer model from mammograms at minimal computational cost.
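As a rough illustration of the volume reconstruction step named above (and not the method under investigation), the sketch below intersects two hypothetical, already-registered tumour masks from roughly orthogonal mammographic views into a coarse occupancy volume; the array shapes, values, and the orthogonal-view geometry are all assumptions.

```python
# Toy two-view backprojection: a voxel is kept only if both registered views support it.
# This is an illustrative sketch, not the reconstruction method described in the abstract.
import numpy as np

def backproject_two_views(frontal: np.ndarray, lateral: np.ndarray) -> np.ndarray:
    """Combine two registered binary tumour masks into a coarse 3D occupancy volume.

    frontal : (ny, nx) mask from one mammographic view (e.g. CC), values in {0, 1}
    lateral : (ny, nz) mask from a roughly orthogonal view (e.g. ML), values in {0, 1}
    returns : (nz, ny, nx) volume; a voxel is 1 only if both views support it
    """
    ny, nx = frontal.shape
    _, nz = lateral.shape
    # Smear each mask along the axis it cannot resolve, then intersect the two volumes.
    vol_frontal = np.broadcast_to(frontal, (nz, ny, nx))                      # constant along z
    vol_lateral = np.broadcast_to(lateral.T[:, :, np.newaxis], (nz, ny, nx))  # constant along x
    return (vol_frontal * vol_lateral).astype(np.uint8)

# Example with toy masks: a small blob visible in both views.
frontal = np.zeros((8, 8)); frontal[3:5, 2:5] = 1
lateral = np.zeros((8, 6)); lateral[3:5, 1:3] = 1
volume = backproject_two_views(frontal, lateral)
print(volume.sum(), "voxels marked as tumour")
```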


2012 ◽  
Author(s):  
Daniele Trocino ◽  
CMS Collaboration

2003 ◽  
Vol 2003 (2) ◽  
pp. 1-4
Author(s):  
Cecile Duboz ◽  
Siew Ching Tan ◽  
Steve Quenette ◽  
Gordon S. Lister ◽  
Bill Appelbe

2020 ◽  
Vol 245 ◽  
pp. 01031
Author(s):  
Thiago Rafael Fernandez Perez Tomei

The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented on custom-designed electronics, and the High Level Trigger, a streamlined version of the CMS offline reconstruction software running on a computer farm. During its second phase, the LHC will reach a luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹ with a pileup of 200 collisions, delivering an integrated luminosity greater than 3000 fb⁻¹ over the full experimental run. To fully exploit the higher luminosity, the CMS experiment will introduce a more advanced Level-1 Trigger and increase the full readout rate from 100 kHz to 750 kHz. CMS is designing an efficient data-processing hardware trigger that will include tracking information and high-granularity calorimeter information. The current Level-1 conceptual design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency system for large throughput and sophisticated data correlation across diverse sources. The higher luminosity, event complexity and input rate present an unprecedented challenge to the High Level Trigger, which aims to achieve an efficiency and rejection factor similar to today's despite the higher pileup and the purer preselection. In this presentation we will discuss the ongoing studies and prospects for the online reconstruction and selection algorithms for the high-luminosity era.
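To put the quoted rates in perspective, the short sketch below works out the Level-1 rate increase and an assumed High Level Trigger input bandwidth; the raw event size is an illustrative assumption, not a CMS design figure.

```python
# Back-of-the-envelope throughput check using the rates quoted in the abstract
# (100 kHz -> 750 kHz Level-1 accept rate). The event size is an assumed value.
L1_RATE_HZ = 750e3          # upgraded Level-1 accept rate (from the abstract)
LEGACY_L1_RATE_HZ = 100e3   # current Level-1 accept rate (from the abstract)
EVENT_SIZE_MB = 5.0         # assumed raw event size at pileup ~200 (illustrative)

hlt_input_tbps = L1_RATE_HZ * EVENT_SIZE_MB * 8 / 1e6   # MB/s -> Tb/s
print(f"Level-1 rate increase: x{L1_RATE_HZ / LEGACY_L1_RATE_HZ:.1f}")
print(f"HLT input bandwidth (assumed {EVENT_SIZE_MB} MB/event): {hlt_input_tbps:.0f} Tb/s")
```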


Author(s):  
K. Nakano ◽  
Y. Tanaka ◽  
H. Suzuki ◽  
K. Hayakawa ◽  
M. Kurodai

Abstract. Unmanned aerial vehicles (UAVs) equipped with image sensors, which have been widely used in fields such as construction, agriculture, and disaster management, can obtain images at the millimetre to decimetre scale. Datasets are generally produced from UAV-acquired images with 3D reconstruction software based on computer vision technologies, which yields realistic surface models. However, it is difficult to obtain feature points from surfaces with limited texture, such as new asphalt or concrete, or to detect the ground in areas such as forests, where it is commonly concealed by vegetation. A promising method to address such issues is the use of UAV-mounted laser scanners. Recently, both low- and high-performance products that integrate laser scanners with direct georeferencing devices have become available, and there have been numerous reports on the various applications of UAVs equipped with laser scanners; however, these reports only discuss UAVs as measuring devices. Therefore, to understand the functioning of UAVs equipped with laser scanners, we investigated the theoretical accuracy of a survey-grade laser scanner unit from the viewpoint of photogrammetry. We evaluated the performance of the VUX-1HA laser scanner mounted on a Skymatix X-LS1 UAV at a construction site. We present the theoretical values obtained from the observation equations and the results of an accuracy assessment of the acquired data in terms of height.
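In the spirit of the theoretical accuracy analysis mentioned above, the following sketch propagates generic ranging, attitude, and GNSS errors into a height uncertainty; the simplified formula and all numbers are assumptions, not the authors' observation equations or the VUX-1HA/X-LS1 specifications.

```python
# Generic error-propagation sketch for the height accuracy of a single UAV laser return.
# Simplified model: height error from ranging noise, platform attitude error, and GNSS height error.
import math

def height_sigma(range_m: float, scan_angle_deg: float,
                 sigma_range_m: float, sigma_attitude_deg: float,
                 sigma_gnss_height_m: float) -> float:
    """1-sigma height uncertainty of a single laser return (illustrative model)."""
    theta = math.radians(scan_angle_deg)
    sigma_att = math.radians(sigma_attitude_deg)
    term_range = (sigma_range_m * math.cos(theta)) ** 2           # ranging noise projected to height
    term_attitude = (range_m * math.sin(theta) * sigma_att) ** 2  # attitude error times lever arm
    term_gnss = sigma_gnss_height_m ** 2                          # direct-georeferencing height error
    return math.sqrt(term_range + term_attitude + term_gnss)

# Example: 50 m range, 20 deg off nadir, 5 mm range noise, 0.015 deg attitude, 3 cm GNSS height.
print(f"sigma_h = {height_sigma(50.0, 20.0, 0.005, 0.015, 0.03) * 100:.1f} cm")
```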


2021 ◽  
Author(s):  
Manuel Chevalier

Abstract. Statistical climate reconstruction techniques are practical tools to study past climate variability from fossil proxy data. In particular, the methods based on probability density functions (PDFs) are powerful at producing robust results from various environments and proxies. However, accessing and curating the necessary calibration data, as well as the complexity of interpreting probabilistic results, often limit their use in palaeoclimatological studies. To address these problems, I present a new R package (crestr) to apply the CREST method (Climate REconstruction SofTware) to diverse palaeoecological datasets. crestr includes a globally curated calibration dataset for six common climate proxies (i.e. plants, beetles, chironomids, rodents, foraminifera, and dinoflagellate cysts) that enables its use in most terrestrial and marine regions. The package can also be used with private data collections instead of, or in combination with, the provided dataset. It also includes a suite of graphical diagnostic tools to represent the data at each step of the reconstruction process and to provide insights into the effect of the different modelling assumptions and external factors that underlie a reconstruction. With this R package, the CREST method can now be used in a scriptable environment, thus simplifying its use and integration in existing workflows. It is hoped that crestr will contribute to producing the much-needed quantified records from the many regions where climate reconstructions are currently lacking, despite the existence of suitable fossil records.
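As a language-agnostic illustration of the PDF idea behind CREST-style reconstructions (written in Python rather than the R package's actual API), the sketch below combines Gaussian climate responses fitted to hypothetical modern occurrence data for co-occurring taxa and reads off the posterior mode; the Gaussian calibration and all numbers are assumptions.

```python
# Conceptual PDF-based reconstruction: each taxon in a fossil sample contributes a climate
# probability density calibrated from modern occurrences; multiplying the densities of
# co-occurring taxa yields a posterior whose mode is the reconstructed value.
import numpy as np

temps = np.linspace(-5, 35, 801)              # candidate mean annual temperature grid (deg C)

def taxon_pdf(modern_temps: np.ndarray) -> np.ndarray:
    """Fit a Gaussian climate response to a taxon's modern occurrence temperatures."""
    mu, sd = modern_temps.mean(), modern_temps.std(ddof=1)
    pdf = np.exp(-0.5 * ((temps - mu) / sd) ** 2)
    return pdf / pdf.sum()

# Hypothetical modern calibration data for three taxa found together in one fossil sample.
taxa = {
    "taxon_A": np.random.default_rng(1).normal(18, 3, 200),
    "taxon_B": np.random.default_rng(2).normal(21, 4, 200),
    "taxon_C": np.random.default_rng(3).normal(16, 2, 200),
}

posterior = np.ones_like(temps)
for occurrences in taxa.values():
    posterior *= taxon_pdf(occurrences)       # combine evidence from co-occurring taxa
posterior /= posterior.sum()

print(f"reconstructed temperature (posterior mode): {temps[posterior.argmax()]:.1f} deg C")
```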


Author(s):  
D. Pagliari ◽  
F. Menna ◽  
R. Roncella ◽  
F. Remondino ◽  
L. Pinto

3D scene modelling, gesture recognition, and motion tracking are fields in rapid and continuous development, driven by the growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow the device to be used as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around it. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason, the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion libraries are corrected and a new reconstruction software is created to produce more accurate models.
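As a minimal sketch of the kind of depth-correction step described above (not the authors' calibration model), the following fits a low-order polynomial mapping measured depths to reference depths and applies it to new data; the polynomial degree and the calibration values are assumptions.

```python
# Fit a simple polynomial depth correction from calibration observations
# (e.g. Kinect depths versus reference depths of a surveyed planar target),
# then apply it to a raw depth map. Illustrative sketch with assumed data.
import numpy as np

def fit_depth_correction(measured_m: np.ndarray, reference_m: np.ndarray, degree: int = 2):
    """Return a callable that maps raw Kinect depths to corrected depths."""
    coeffs = np.polyfit(measured_m, reference_m, degree)   # least-squares polynomial fit
    return np.poly1d(coeffs)

# Hypothetical calibration observations: the sensor slightly overestimates far ranges.
measured = np.array([0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2])
reference = np.array([0.79, 1.18, 1.57, 1.95, 2.33, 2.70, 3.07])

correct = fit_depth_correction(measured, reference)
raw_depth_map = np.full((424, 512), 2.5)                   # toy depth image, metres
corrected_depth_map = correct(raw_depth_map)
print(f"2.50 m raw -> {correct(2.5):.3f} m corrected")
```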

