Composite Dual Window Scattering Correction Technique In PET

Author(s):  
Lingxiong Shao ◽  
J.S. Karp ◽  
R. Freifelder
2018 ◽  
Vol 170 ◽  
pp. 06006 ◽  
Author(s):  
David Tisseur ◽  
Navnina Bhatia ◽  
Nicolas Estre ◽  
Léonie Berge ◽  
Daniel Eck ◽  
...  

One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after interacting with the object. Because the scattered intensity simply adds to the transmitted intensity, this contribution increases the measured intensities; the resulting overestimation of measured intensity corresponds to an underestimation of absorption, producing artifacts such as cupping, shading, and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range with large objects, owing to the higher Scatter-to-Primary Ratio (SPR). Additionally, high-energy incident photons scattered by the Compton effect are more forward-directed and hence more likely to reach the detector. In the MeV energy range, the contribution of photons produced by pair production and the bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the Scatter Kernel Superposition (SKS) method. The algorithm uses continuously thickness-adapted kernels: analytical parameterizations of the scatter kernels are derived as functions of material thickness to form continuously thickness-adapted kernel maps, which are used to correct the projections. This approach has proved efficient in producing a finer sampling of the kernels with respect to object thickness, and it is applicable over a wide range of imaging conditions. Since no extra hardware is required, the approach is particularly advantageous where experimental complexity must be avoided. It has previously been tested successfully in the energy range 100 keV – 6 MeV. In this paper, the kernels are simulated using MCNP in order to account for both photon and electron processes in the scattered-radiation contribution. We present scatter correction results for a large object scanned with a 9 MeV linear accelerator.
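For illustration, here is a minimal Python sketch of the SKS idea described above: scatter in each projection is estimated as a superposition of thickness-dependent kernels and removed iteratively. The Gaussian kernel shape, the `sigma_of_t`/`amp_of_t` parameterizations, and all function names are illustrative assumptions; the paper's actual kernels are fitted to MCNP simulations.

```python
# Illustrative sketch of Scatter Kernel Superposition (SKS) with
# continuously thickness-adapted kernels. NOT the authors' code:
# the Gaussian parameterization below is an assumption standing in
# for kernels fitted to Monte Carlo (MCNP) simulations.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size, sigma, amplitude):
    """Isotropic Gaussian scatter kernel on a (size x size) grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return amplitude * np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def sks_correct(projection, thickness_map, thickness_bins,
                sigma_of_t, amp_of_t, kernel_size=51, n_iter=3):
    """Estimate and subtract scatter from one projection.

    sigma_of_t / amp_of_t: smooth functions of material thickness t,
    playing the role of the continuous kernel parameterization.
    """
    primary = projection.copy()
    for _ in range(n_iter):  # scatter is generated by the (estimated) primary
        scatter = np.zeros_like(projection)
        for t0, t1 in zip(thickness_bins[:-1], thickness_bins[1:]):
            mask = (thickness_map >= t0) & (thickness_map < t1)
            if not mask.any():
                continue
            t_mid = 0.5 * (t0 + t1)
            k = gaussian_kernel(kernel_size, sigma_of_t(t_mid), amp_of_t(t_mid))
            scatter += fftconvolve(primary * mask, k, mode="same")
        primary = np.clip(projection - scatter, 0.0, None)
    return primary
```

The iteration reflects the fact that scatter is generated by the primary signal, which is itself only known after correction; a few fixed-point passes usually suffice in kernel-superposition schemes.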


Atmosphere ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 111 ◽  
Author(s):  
Chul-Min Ko ◽  
Yeong Yun Jeong ◽  
Young-Mi Lee ◽  
Byung-Sik Kim

This study aimed to enhance the accuracy of extreme rainfall forecasts using a machine learning technique, for use in forecasting hydrological impact. Machine learning with the XGBoost technique was applied to correct the quantitative precipitation forecast (QPF) provided by the Korea Meteorological Administration (KMA), in order to develop a hydrological quantitative precipitation forecast (HQPF) for flood inundation modeling. The performance of the machine learning technique for HQPF production was evaluated on two cases: heavy rainfall events in Seoul, and heavy rainfall accompanied by Typhoon Kong-rey (1825). Well-known statistical metrics were calculated to compare the errors of QPF-based and HQPF-based rainfall against observational data from four sites. For the heavy rainfall case in Seoul, the mean absolute errors (MAE) at the four sites, i.e., Nowon, Jungnang, Dobong, and Gangnam, were 18.6 mm/3 h, 19.4 mm/3 h, 48.7 mm/3 h, and 19.1 mm/3 h for QPF and 13.6 mm/3 h, 14.2 mm/3 h, 33.3 mm/3 h, and 12.0 mm/3 h for HQPF, respectively. These results clearly indicate that the machine learning technique can improve forecasting performance for localized rainfall. In addition, the HQPF-based rainfall better captures the peak rainfall amount and spatial pattern. Therefore, the HQPF can help improve the accuracy of intense rainfall forecasts, which in turn benefits the forecasting of floods and their hydrological impacts.
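As a hedged illustration of the correction step, the sketch below trains an XGBoost regressor to map predictors (including a raw QPF-like value) to observed rainfall and scores it with MAE. The feature set and hyperparameters are assumptions, and the data are synthetic placeholders, not the KMA forecasts or site observations used in the study.

```python
# Illustrative XGBoost correction of a precipitation forecast.
# Synthetic placeholder data; column 0 plays the role of the raw QPF,
# the remaining columns stand in for auxiliary predictors.
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X_train, X_test = rng.random((1000, 5)), rng.random((200, 5))
y_train = 2.0 * X_train[:, 0] + rng.normal(0, 0.1, 1000)  # "observed" rain
y_test = 2.0 * X_test[:, 0] + rng.normal(0, 0.1, 200)

model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

hqpf = model.predict(X_test)              # corrected forecast (HQPF-like)
print("MAE (mm/3 h):", mean_absolute_error(y_test, hqpf))
```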


Geomatics ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 148-176
Author(s):  
Maan Khedr ◽  
Naser El-Sheimy

Mobile location-based services (MLBS) are attracting attention for their potential public and personal use in a variety of applications such as location-based advertisement, smart shopping, smart cities, health applications, emergency response, and even gaming. Many of these applications rely on Inertial Navigation Systems (INS) due to the degraded GNSS service indoors. INS-based MLBS on smartphones is hindered by the quality of the MEMS sensors provided in smartphones, whose high noise and error levels cause the navigation solution to drift rapidly. Pedestrian dead reckoning (PDR) is an INS-based navigation technique that exploits human motion to reduce navigation solution errors, but the errors cannot be eliminated without aid from other techniques. The purpose of this study is to enhance and extend the short-term reliability of PDR systems for smartphones as a standalone system through an enhanced step detection algorithm, a periodic attitude correction technique, and a novel PCA-based motion direction estimation technique. Testing shows that the developed system (S-PDR) provides a reliable short-term navigation solution, with a final positioning error of up to 6 m after 3 min of runtime. These results were compared with a PDR solution using an Xsens IMU, a high-grade MEMS IMU, which performed worse than S-PDR. The findings show that S-PDR can be used to aid GNSS in challenging environments and can be a viable option for short-term indoor navigation until aiding is provided by alternative means. Furthermore, the extended reliable solution of S-PDR can help reduce the operational complexity of aiding navigation systems such as RF-based indoor navigation and magnetic map matching, as it reduces the frequency with which these aiding techniques must be applied.
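The sketch below illustrates two of the ingredients named above, under stated assumptions: step detection by peak-picking on the accelerometer norm, and a PCA-based estimate of the horizontal motion direction. Thresholds, sampling rate, and function names are illustrative, not the S-PDR implementation.

```python
# Illustrative PDR building blocks (assumed parameters, not S-PDR itself):
# 1) step detection on the accelerometer magnitude,
# 2) PCA-based motion direction from horizontal acceleration.
import numpy as np
from scipy.signal import find_peaks

def detect_steps(acc_xyz, fs=100.0, min_peak=1.5, min_dist_s=0.3):
    """Return sample indices of detected steps from (N, 3) accelerometer data."""
    mag = np.linalg.norm(acc_xyz, axis=1) - 9.81   # crude gravity removal
    peaks, _ = find_peaks(mag, height=min_peak, distance=int(min_dist_s * fs))
    return peaks

def pca_motion_direction(acc_horizontal):
    """Heading (rad) of the dominant horizontal acceleration axis via PCA."""
    centered = acc_horizontal - acc_horizontal.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                              # first principal component
    return np.arctan2(direction[1], direction[0])
```

In a PDR loop, each detected step advances the position by an estimated step length along the estimated heading; the periodic attitude correction mentioned above would bound the heading drift between such updates.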


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2009
Author(s):  
Fatemeh Najafi ◽  
Masoud Kaveh ◽  
Diego Martín ◽  
Mohammad Reza Mosavi

Traditional authentication techniques, such as cryptographic solutions, are vulnerable to various attacks on session keys and data. Physical unclonable functions (PUFs), such as dynamic random access memory (DRAM)-based PUFs, have been introduced as promising security blocks to enable cryptography and authentication services. However, PUFs are often sensitive to internal and external noise, which causes reliability issues. The need for additional robustness and reliability leads to error-reduction methods such as error correction codes (ECCs) and pre-selection schemes, which incur considerable extra overhead. In this paper, we propose deep PUF: a deep convolutional neural network (CNN)-based scheme using latency-based DRAM PUFs without the need for any additional error correction technique. The proposed framework provides a higher number of challenge-response pairs (CRPs) by eliminating the pre-selection and filtering mechanisms. The entire complexity of device identification is moved to the server side, which enables the authentication of resource-constrained nodes. Experimental results from a 1 Gb DDR3 module show that responses under varying conditions can be classified with at least 94.9% accuracy using the CNN. After applying the proposed authentication steps to the classification results, we show that the probability of identification error is drastically reduced, leading to highly reliable authentication.
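A minimal sketch of the classification idea follows, assuming PyTorch and an illustrative architecture: each latency-based DRAM PUF response is treated as a 2-D bit map, and a small CNN predicts which device produced it. Layer sizes and the input shape are guesses, not the paper's network.

```python
# Illustrative CNN for classifying noisy DRAM PUF responses by device.
# Architecture and input size (64x64 response bitmap) are assumptions.
import torch
import torch.nn as nn

class PUFClassifier(nn.Module):
    def __init__(self, n_devices):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, n_devices),
        )

    def forward(self, x):            # x: (batch, 1, H, W) response bitmap
        return self.classifier(self.features(x))

model = PUFClassifier(n_devices=8)
logits = model(torch.randn(4, 1, 64, 64))  # noisy responses -> device logits
```

Because the classifier tolerates noisy inputs directly, the expensive ECC and pre-selection stages can be dropped on the device, with all computation performed server-side as the abstract describes.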


Author(s):  
Hongwei Liu ◽  
Rui Yang ◽  
Pingjiang Wang ◽  
Jihong Chen ◽  
Hua Xiang

The objective of this research is to develop a novel correction mechanism to reduce the fluctuation range of tools in numerical control (NC) machining. Error compensation is an effective method of improving the machining accuracy of a machine tool. If the difference between two adjacent compensation values is too large, the fluctuation range of the tool increases, which seriously degrades the surface quality of the machined parts. The compensation data are processed using the simplex method of linear programming, which reduces the fluctuation range of the tool and optimizes the tool path. The key step in software error compensation is to modify the initial compensation data iteratively; the corrected tool-path data are then converted into compensated NC codes by a postprocessor, implemented in the compensation module, to ensure a smooth tool path. The generated, calibrated, and amended NC codes were fed directly to the machine tool controller. The technique was verified by repeated measurements, and the results of the experiments demonstrate efficient compensation and a significant improvement in the machining accuracy of the NC machine tool.
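One plausible way to cast the smoothing described above as a linear program is sketched below: choose compensation values close to the measured errors while minimizing the largest jump between adjacent values, solved with SciPy's LP solver. The formulation, tolerance, and sample values are assumptions; the paper's exact simplex formulation is not reproduced here.

```python
# Illustrative LP smoothing of compensation data (assumed formulation):
# minimize the maximum adjacent jump t subject to each compensation
# value c_i staying within +/- tol of the measured error e_i.
import numpy as np
from scipy.optimize import linprog

def smooth_compensation(errors, tol=0.005):
    n = len(errors)
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                   # decision vector [c_1..c_n, t]: min t
    A_ub, b_ub = [], []
    for i in range(n - 1):           # |c_{i+1} - c_i| <= t
        up = np.zeros(n + 1); up[i + 1], up[i], up[-1] = 1.0, -1.0, -1.0
        dn = np.zeros(n + 1); dn[i + 1], dn[i], dn[-1] = -1.0, 1.0, -1.0
        A_ub += [up, dn]; b_ub += [0.0, 0.0]
    bounds = [(e - tol, e + tol) for e in errors] + [(0.0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

smoothed = smooth_compensation([0.010, 0.018, 0.004, 0.015])  # mm, illustrative
```

The smoothed values trade a small deviation from the measured errors for much smaller step-to-step changes, which is exactly the property that keeps the tool's fluctuation range small.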


2021 ◽  
Vol 12 ◽  
pp. 215145932199274
Author(s):  
Victor Garcia-Martin ◽  
Ana Verdejo-González ◽  
David Ruiz-Picazo ◽  
José Ramírez-Villaescusa

Introduction: Physiological aging frequently leads to degenerative changes and spinal deformity. In patients with hypolordotic fusions or ankylosing illnesses such as diffuse idiopathic skeletal hyperostosis or ankylosing spondylitis, compensation mechanisms can be altered, causing severe pain and disability. In addition, if a total hip replacement and/or knee replacement is performed, both pelvic and lower-limb compensation mechanisms can be compromised, and prosthetic dislocation or impingement syndrome may occur. Pedicle subtraction osteotomy has proven to be the optimal correction technique for spinal deformity in patients with a rigid spine. Case Presentation: A 70-year-old male patient meeting diffuse idiopathic skeletal hyperostosis criteria, with a rigid lumbar kyphosis and severe disability, who had previously undergone total hip and knee replacement, underwent corrective surgery by pedicle subtraction osteotomy. The procedure and outcomes are presented here. Conclusion: In symptomatic patients with sagittal imbalance and a rigid spine, pedicle subtraction osteotomy can correct spinal deformity and re-establish sagittal balance.


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Samuel Maddrell-Mander ◽  
Lakshan Ram Madhan Mohan ◽  
Alexander Marshall ◽  
Daniel O’Hanlon ◽  
Konstantinos Petridis ◽  
...  

This paper presents the first study of Graphcore's Intelligence Processing Unit (IPU) in the context of particle physics applications. The IPU is a new type of processor optimised for machine learning. Comparisons are made for neural-network-based event simulation, multiple-scattering correction, and flavour tagging, implemented on IPUs, GPUs and CPUs, using a variety of neural network architectures and hyperparameters. Additionally, a Kálmán filter for track reconstruction is implemented on IPUs and GPUs. The results indicate that IPUs hold considerable promise in addressing the rapidly increasing compute needs in particle physics.
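For context, the core kernel of such a filter is small and regular, which is what makes it a natural target for accelerator benchmarks. Below is a generic linear Kalman predict/update step in NumPy; the matrices are placeholders, not the track-reconstruction model used in the paper.

```python
# Generic linear Kalman filter step (predict + update). The state-space
# matrices F, H, Q, R here are placeholders, not the paper's track model.
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    # Predict: propagate state estimate and covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fold in measurement z via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```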


2020 ◽  
Vol 12 (2) ◽  
pp. 234 ◽  
Author(s):  
Alexander Kokhanovsky ◽  
Jason E. Box ◽  
Baptiste Vandecrux ◽  
Kenneth D. Mankoff ◽  
Maxim Lamare ◽  
...  

We present a simplified atmospheric correction algorithm for snow/ice albedo retrievals using single-view satellite measurements. The technique is validated using the Ocean and Land Colour Instrument (OLCI) on board the Copernicus Sentinel-3 satellite, together with ground-based spectral or broadband albedo measurements from locations on the Greenland ice sheet and in the French Alps. Through comparison with independent ground observations, the technique is shown to perform accurately in a range of conditions, from a 2100 m elevation mid-latitude location in the French Alps to a network of 15 locations across a 2390 m elevation range in seven regions of the Greenland ice sheet. The retrieved broadband albedo is accurate to within 5% over a wide broadband albedo range (0.5) for the N = 4155 Greenland observations, with no apparent bias.

