Dynamic PET Imaging Using Dual Texture Features

2022 ◽  
Vol 15 ◽  
Author(s):  
Zhanglei Ouyang ◽  
Shujun Zhao ◽  
Zhaoping Cheng ◽  
Yanhua Duan ◽  
Zixiang Chen ◽  
...  

Purpose: This study explores how adding texture features to dynamic positron emission tomography (PET) reconstruction affects the resulting images.

Methods: We improved a reconstruction method that combines dual radiomic texture features. Multiple short time frames are summed to obtain composite frames, and the image reconstructed from the composite frames serves as the prior image. Texture features are extracted from the prior image using the gray level-gradient co-occurrence matrix (GGCM) and the gray-level run-length matrix (GLRLM). The prior information comprises the intensity of the prior image, the inverse difference moment of the GGCM, and the long-run low gray-level emphasis of the GLRLM.

Results: Computer simulations show that, compared with traditional maximum-likelihood reconstruction, the proposed method achieves a higher signal-to-noise ratio (SNR) in dynamically reconstructed PET images. Compared with similar methods, the proposed algorithm yields a better normalized mean squared error (NMSE) and contrast recovery coefficient (CRC) at the tumor in the reconstructed image. Simulation studies on clinical patient images show that the method also reconstructs high-uptake lesions more accurately.

Conclusion: Adding texture features to dynamic PET reconstruction makes the reconstructed images more accurate at the tumor.
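As a rough illustration of one of the prior terms above, the following sketch computes a horizontal gray-level run-length matrix and the long-run low gray-level emphasis for a tiny quantized image. This is a minimal, assumed implementation (single run direction, 1-based gray-level indexing in the feature formula); the paper's GGCM term and the full reconstruction are not reproduced here.

```python
import numpy as np

def glrlm_horizontal(img, n_levels):
    """Gray-level run-length matrix for horizontal runs; p[g, r-1] counts
    runs of gray level g (0-based) with length r."""
    max_run = img.shape[1]
    p = np.zeros((n_levels, max_run))
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                p[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        p[run_val, run_len - 1] += 1
    return p

def long_run_low_gray_emphasis(p):
    # LRLGLE = (1/N_r) * sum_{g,r} p(g,r) * r^2 / g^2, gray levels indexed from 1
    g = np.arange(1, p.shape[0] + 1)[:, None]
    r = np.arange(1, p.shape[1] + 1)[None, :]
    return (p * r**2 / g**2).sum() / p.sum()

img = np.array([[0, 0, 0, 1],
                [0, 0, 2, 2],
                [3, 3, 3, 3]])
p = glrlm_horizontal(img, 4)
feature = long_run_low_gray_emphasis(p)
```

The feature weights long runs of low gray levels most heavily, which is why it responds to smooth, dark background regions in the prior image.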

2021 ◽  
Vol 13 (14) ◽  
pp. 2828
Author(s):  
Yao Xiao ◽  
Wei Zhao ◽  
Mingguo Ma ◽  
Kunlong He

Land surface temperature (LST) is a crucial input parameter in the study of land surface water and energy budgets at local and global scales. Because of cloud obstruction, there are many gaps in thermal infrared remote sensing LST products. To fill these gaps, an improved LST reconstruction method for cloud-covered pixels was proposed by linking moderate resolution imaging spectroradiometer (MODIS) LST to other surface variables with a random forest regression model. The solar radiation accumulated from sunrise to satellite overpass, taken from the surface solar irradiance product of the Feng Yun-4A geostationary satellite, was used to represent the impact of cloud cover on LST. With the proposed method, time-series gap-free LST products were generated for Chongqing City as an example. Visual assessment indicated that the reconstructed gap-free LST images sufficiently capture the LST spatial pattern associated with surface topography and land cover conditions. Additionally, validation with in situ observations revealed that the reconstructed cloud-covered LSTs perform similarly to the clear-sky LSTs, with correlation coefficients of 0.92 and 0.89, respectively. The unbiased root mean squared error was 2.63 K. Overall, the validation confirmed the good performance of this approach and its potential for regional application.
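The gap-filling idea above can be sketched as training a random forest on clear-sky pixels and predicting the cloud-covered ones. The predictors and their functional relationship to LST below are invented stand-ins (NDVI, elevation, accumulated radiation), not the paper's actual variable set; only the train-on-clear / predict-on-cloudy pattern is the point.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# hypothetical per-pixel predictors: NDVI, elevation, and accumulated solar
# radiation from sunrise to overpass (the FY-4A SSI variable in the paper)
n = 2000
X = np.column_stack([rng.uniform(0, 1, n),        # NDVI
                     rng.uniform(200, 2000, n),   # elevation (m)
                     rng.uniform(0, 25, n)])      # accumulated radiation (MJ m^-2)
# synthetic LST (K) with an assumed dependence on the predictors
lst = 300 - 6.5e-3 * X[:, 1] + 0.8 * X[:, 2] - 5 * X[:, 0] + rng.normal(0, 0.5, n)

clear = rng.uniform(0, 1, n) > 0.3                # ~70% clear-sky pixels for training
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[clear], lst[clear])

filled = model.predict(X[~clear])                 # reconstruct cloud-covered pixels
rmse = np.sqrt(np.mean((filled - lst[~clear]) ** 2))
```

In the paper the model is built per region and time step; the key design choice is that the accumulated-radiation predictor lets the regression lower the reconstructed LST under thick cloud rather than reproducing a clear-sky value.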


Animals ◽  
2022 ◽  
Vol 12 (2) ◽  
pp. 195
Author(s):  
Małgorzata Domino ◽  
Marta Borowska ◽  
Anna Trojakowska ◽  
Natalia Kozłowska ◽  
Łukasz Zdrojkowski ◽  
...  

Appropriate matching of rider and horse sizes is becoming an increasingly important issue in the care of riding horses as the human population becomes heavier. Recently, infrared thermography (IRT) was shown to be effective in differentiating the effects of rider:horse bodyweight ratios of 10.6% versus 21.3%, but not 10.1% versus 15.3%. As IRT images contain many pixels reflecting the complexity of the body's surface, pixel relations were assessed by image texture analysis using histogram statistics (HS), gray-level run-length matrix (GLRLM), and gray-level co-occurrence matrix (GLCM) approaches. The study aimed to determine differences in texture features of thermal images under rider:horse bodyweight ratios of 10–12%, >12% to ≤15%, and >15% to <18%. Twelve horses were ridden by each of six riders assigned to light (L), moderate (M), and heavy (H) groups. Thermal images were taken pre- and post-standard exercise and underwent conventional and texture analysis. Texture analysis required image decomposition into red, green, and blue components. Among the 372 returned features, 95 HS features, 48 GLRLM features, and 96 GLCM features differed with exercise, whereas 29 HS features, 16 GLRLM features, and 30 GLCM features differed with bodyweight ratio. In contrast to conventional thermal features, the texture heterogeneity measures InvDefMom, SumEntrp, Entropy, DifVarnc, and DifEntrp expressed consistent, measurable differences when the red component was considered.
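Two of the GLCM measures named above (an inverse difference moment, of which InvDefMom is one variant, and an entropy) can be sketched directly from a co-occurrence matrix. The implementation below is a minimal, assumed one: a single pixel offset, symmetric and normalized, on a toy image standing in for one quantized color channel of an IRT frame.

```python
import numpy as np

def glcm(img, n_levels, offset=(0, 1)):
    """Symmetric, normalised gray-level co-occurrence matrix for one offset."""
    dr, dc = offset
    p = np.zeros((n_levels, n_levels))
    rows, cols = img.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            p[img[r, c], img[r + dr, c + dc]] += 1
    p = p + p.T                       # make symmetric
    return p / p.sum()

def inv_diff_moment(p):
    # weights pairs by 1/(1+(i-j)^2): high for homogeneous regions
    i, j = np.indices(p.shape)
    return (p / (1.0 + (i - j) ** 2)).sum()

def glcm_entropy(p):
    nz = p[p > 0]
    return -(nz * np.log2(nz)).sum()

# e.g. the red channel of an IRT image, quantised to 4 gray levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, 4)
idm, ent = inv_diff_moment(p), glcm_entropy(p)
```

In practice several offsets and angles are averaged, and the study computes these per color component after RGB decomposition.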


2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three image enhancement methods, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. A comparative evaluation of these techniques examines objective and subjective image quality parameters, e.g., peak signal-to-noise ratio (PSNR), entropy H, and mean squared error (MSE), to measure the quality of the enhanced gray-scale images. For gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their suitability for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method appears to overcome this disadvantage, as it tends to preserve both brightness and contrast enhancement. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
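The first-level decomposition described above can be sketched as follows: split the image at its median gray level and equalize each sub-image within its own gray range, so pixels at or below the median stay at or below it. This is a minimal sketch of the standard DSIHE idea, not the paper's full two-level pipeline (the spike-noise stage is omitted).

```python
import numpy as np

def dsihe(img, n_levels=256):
    """Dualistic sub-image histogram equalisation: split at the median gray
    level and equalise each sub-image within its own range."""
    med = int(np.median(img))
    out = np.empty_like(img)
    for lo, hi, mask in [(0, med, img <= med),
                         (med + 1, n_levels - 1, img > med)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        hist = np.bincount(vals, minlength=n_levels).astype(float)
        cdf = np.cumsum(hist) / vals.size
        # map gray levels into [lo, hi] by the sub-image's own CDF
        out[mask] = np.clip(lo + cdf[img[mask]] * (hi - lo), lo, hi).astype(img.dtype)
    return out

img = np.clip(np.random.default_rng(0).normal(100, 15, (64, 64)),
              0, 255).astype(np.uint8)
enhanced = dsihe(img)
```

Because each half equalizes only within its own range, the output median stays near the input median, which is exactly the brightness-preservation property the abstract credits DSIHE with.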


2021 ◽  
Vol 13 (8) ◽  
pp. 1485
Author(s):  
Naveen Ramachandran ◽  
Sassan Saatchi ◽  
Stefano Tebaldini ◽  
Mauro Mariotti d’Alessandro ◽  
Onkar Dikshit

Low-frequency tomographic synthetic aperture radar (TomoSAR) techniques provide an opportunity for quantifying the dynamics of dense tropical forest vertical structures. Here, we compare the performance of different TomoSAR processing techniques (back-projection (BP), Capon beamforming (CB), and MUltiple SIgnal Classification (MUSIC)) and compensation techniques for estimating forest height (FH) and the forest vertical profile from the backscattered echoes. The study also examines how polarimetric measurements in linear, compact, hybrid, and dual circular modes influence parameter estimation. The tomographic analysis was carried out using P-band data acquired over the Paracou study site in French Guiana, and the quantitative evaluation was performed using LiDAR-based canopy height measurements taken during the 2009 TropiSAR campaign. Our results show that the relative root mean squared error (RMSE) of height was less than 10%, with negligible systematic errors across the range, and with Capon and MUSIC performing better for height estimates. Radiometric compensation, such as slope correction, does not improve tree height estimation. Further, we compare and analyze the impact of the compensation approach on forest vertical profiles, tomographic metrics, and the integrated backscattered power. It is observed that radiometric compensation increases the backscatter values of the vertical profile, with a slight shift in the local maxima of the canopy layer for both the Capon and the MUSIC estimators. Our results suggest that applying the proper processing and compensation techniques to P-band TomoSAR observations from space will allow the monitoring of forest vertical structure and biomass dynamics.
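To make the Capon estimator concrete, the sketch below simulates a multi-baseline P-band stack with two scattering layers (ground and canopy) and forms the Capon vertical power spectrum. All geometry numbers (wavelength, slant range, incidence angle, baselines, layer heights) are invented for illustration; only the spectrum formula P(z) = 1 / (a(z)^H R^-1 a(z)) is the standard technique.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bl = 8
wavelength = 0.69                       # P-band wavelength in metres (assumption)
r0, theta = 4000.0, np.deg2rad(35.0)    # hypothetical slant range, incidence angle
baselines = np.linspace(0.0, 80.0, n_bl)

def steering(z):
    # elevation wavenumber per baseline (standard TomoSAR phase model)
    kz = 4.0 * np.pi * baselines / (wavelength * r0 * np.sin(theta))
    return np.exp(1j * kz * z)

# simulate multilooked snapshots from two layers: ground (0 m) and canopy (25 m)
T = 64
Y = np.zeros((n_bl, T), dtype=complex)
for z, amp in [(0.0, 1.0), (25.0, 0.8)]:
    s = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
    Y += amp * np.outer(steering(z), s)
Y += 0.1 * (rng.standard_normal((n_bl, T)) + 1j * rng.standard_normal((n_bl, T)))

R = Y @ Y.conj().T / T                                   # sample covariance
R += 1e-3 * np.real(np.trace(R)) / n_bl * np.eye(n_bl)   # diagonal loading
Rinv = np.linalg.inv(R)

heights = np.linspace(-10.0, 50.0, 121)
capon = np.array([1.0 / np.real(steering(z).conj() @ Rinv @ steering(z))
                  for z in heights])
```

The spectrum peaks near the simulated ground and canopy phase centres; forest height then follows from the separation of the peaks, which is the step the LiDAR data validate in the study.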


2018 ◽  
Vol 127 ◽  
pp. S279-S280 ◽  
Author(s):  
P. Brynolfsson ◽  
T. Löfstedt ◽  
T. Asklund ◽  
T. Nyholm ◽  
A. Garpebring

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Steve Kanters ◽  
Mohammad Ehsanul Karim ◽  
Kristian Thorlund ◽  
Aslam Anis ◽  
Nick Bansback

Abstract
Background: The use of individual patient data (IPD) in network meta-analyses (NMA) is growing rapidly. This study aimed to determine, through simulations, the impact of selected factors on the validity and precision of NMA estimates when combining IPD and aggregate data (AgD), relative to using AgD only.
Methods: Three analysis strategies were compared via simulations: (1) AgD NMA without adjustments (AgD-NMA); (2) AgD NMA with meta-regression (AgD-NMA-MR); and (3) IPD-AgD NMA with meta-regression (IPD-NMA). We compared 108 parameter permutations: number of network nodes (3, 5, or 10); proportion of treatment comparisons informed by IPD (low, medium, or high); equally sized trials (two-armed, 200 patients per arm) or larger IPD trials (500 patients per arm); sparse or well-populated networks; and type of effect modification (none, constant across treatment comparisons, or exchangeable). Data were generated over 200 simulations for each combination of parameters, each using linear regression with Normal distributions. To assess model performance and estimate validity, the mean squared error (MSE) and bias of treatment-effect and covariate estimates were collected. Standard errors (SE) and percentiles were used to compare estimate precision.
Results: Overall, IPD-NMA performed best in terms of validity and precision. The median MSE was lower for IPD-NMA in 88 of 108 scenarios (results were similar otherwise). On average, the IPD-NMA median MSE was 0.54 times that of AgD-NMA-MR. Similarly, the SEs of the IPD-NMA treatment-effect estimates were one-fifth the size of the AgD-NMA-MR SEs. The degree to which IPD-NMA improved validity and precision varied across scenarios and was associated with the amount of IPD. Using IPD in small or sparse networks consistently improved validity and precision; in large or dense networks, however, IPD tended to have negligible impact if too little IPD was included. Similar results apply to the meta-regression coefficient estimates.
Conclusions: Our simulation study suggests that using IPD in NMA will considerably improve the validity and precision of treatment-effect and regression-coefficient estimates in most IPD-NMA data scenarios. However, IPD may not add meaningful validity and precision to NMAs of large and dense treatment networks when negligible amounts of IPD are used.
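The core contrast the simulations exercise (patient-level adjustment for effect modification versus unadjusted aggregate pooling) can be sketched in miniature. The setup below is not the study's NMA model: it uses three two-arm trials of one comparison, a single continuous effect modifier, and 200 simulation replicates, and contrasts an IPD regression with an unadjusted pooling of trial-level mean differences.

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect, beta_em = -0.5, 0.3        # treatment effect and effect-modifier slope
trial_means = (0.0, 1.0, 2.0)           # covariate means differ across trials

mse_ipd, mse_agd = [], []
for _ in range(200):
    xs, ts, ys = [], [], []
    for m in trial_means:
        x = rng.normal(m, 1.0, 200)
        t = rng.integers(0, 2, 200).astype(float)
        y = 1.0 + true_effect * t + beta_em * t * x + rng.normal(0, 1.0, 200)
        xs.append(x); ts.append(t); ys.append(y)
    # IPD analysis: patient-level regression with a treatment-covariate interaction
    x, t, y = map(np.concatenate, (xs, ts, ys))
    X = np.column_stack([np.ones_like(y), t, t * x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse_ipd.append((coef[1] - true_effect) ** 2)
    # unadjusted AgD analysis: pool trial-level mean differences, ignoring the
    # differing covariate means (analogous to AgD-NMA without adjustment)
    deltas = [yy[tt == 1].mean() - yy[tt == 0].mean() for tt, yy in zip(ts, ys)]
    mse_agd.append((np.mean(deltas) - true_effect) ** 2)
```

Because the trials differ in their covariate means, the unadjusted aggregate estimate is biased by the effect modification, while the IPD interaction model recovers the treatment effect at a reference covariate value; the MSE gap mirrors, in caricature, the study's headline result.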


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Rajesh Kumar ◽  
Rajeev Srivastava ◽  
Subodh Srivastava

A framework for the automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages of the proposed methodology include enhancement of the microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is employed at each design step of the proposed framework after a comparative analysis of the commonly used methods in each category. For highlighting the details of tissues and structures, the contrast limited adaptive histogram equalization approach is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it outperforms other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape- and morphology-based features are extracted from the segmented images. These include gray-level texture features, color-based features, color gray-level texture features, Laws' Texture Energy-based features, Tamura's features, and wavelet features. Finally, the k-nearest neighbor method is used to classify images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated using well-known parameters on 1000 randomly selected microscopic biopsy images of the four fundamental tissues (connective, epithelial, muscular, and nervous).
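The final classification step can be sketched as a k-nearest-neighbor classifier over per-image feature vectors. The feature vectors below are synthetic Gaussian stand-ins for the texture/color/shape features the framework extracts; only the classifier pattern is illustrated.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# hypothetical feature vectors (e.g. texture, color, and shape features per image);
# normal and cancerous classes simulated as two shifted Gaussian clusters
n_feat = 12
normal = rng.normal(0.0, 1.0, (200, n_feat))
cancer = rng.normal(1.5, 1.0, (200, n_feat))
X = np.vstack([normal, cancer])
y = np.array([0] * 200 + [1] * 200)    # 0 = normal, 1 = cancerous

shuffle = rng.permutation(400)
X, y = X[shuffle], y[shuffle]
train, test = slice(0, 300), slice(300, 400)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X[train], y[train])
accuracy = knn.score(X[test], y[test])
```

In the actual framework the features would first be scaled to a common range, since k-NN distances are otherwise dominated by the largest-magnitude features.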


2021 ◽  
Author(s):  
Sascha Flaig ◽  
Timothy Praditia ◽  
Alexander Kissinger ◽  
Ulrich Lang ◽  
Sergey Oladyshkin ◽  
...  

In order to prevent possible negative impacts of water abstraction in an ecologically sensitive moor south of Munich (Germany), a "predictive control" scheme is in place. We design an artificial neural network (ANN) to provide predictions of moor water levels and to separate hydrological from anthropogenic effects. As the moor is a dynamic system, we adopt the "long short-term memory" (LSTM) architecture.

To find the best LSTM setup, we train, test, and compare LSTMs with two different structures: (1) the non-recurrent one-to-one structure, where the series of inputs is accumulated and fed into the LSTM; and (2) the recurrent many-to-many structure, where inputs enter the LSTM step by step (including LSTM forecasts from previous forecast time steps). The outputs of our LSTMs then feed into a readout layer that converts the hidden states into water level predictions. We hypothesize that the recurrent structure is the better one because it more closely resembles the typical structure of differential equations for dynamic systems, as they would usually be used for hydro(geo)logical systems. We evaluate the comparison with the mean squared error as the test metric and conclude that the recurrent many-to-many LSTM performs better in the analyzed complex situations. It also produces plausible predictions with reasonable accuracy for a seven-day prediction horizon.

Furthermore, we analyze the impact of preprocessing meteorological data into evapotranspiration data using typical ETA models. Inserting knowledge into the LSTM in the form of ETA models (rather than implicitly having the LSTM learn the ETA relations) leads to superior prediction results. This finding aligns well with current ideas on physically inspired machine learning.

As an additional validation step, we investigate whether our ANN is able to correctly identify both anthropogenic and natural influences and their interaction. To this end, we investigate two comparable pumping events under different meteorological conditions. Results indicate that all individual and combined influences of the input parameters on water levels can be represented well. The neural networks correctly recognize that the predominant precipitation and lower evapotranspiration during one pumping event lead to a smaller decrease of the hydrograph.

To further demonstrate the capability of the trained neural network, scenarios of pumping events are created and simulated.

In conclusion, we show that more robust and accurate predictions of moor water levels can be obtained if available physical knowledge of the modeled system is used to design and train the neural network. The artificial neural network can be a useful instrument to assess the impact of water abstraction by quantifying the anthropogenic influence.
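The many-to-many structure with a readout layer can be sketched as a single LSTM cell applied step by step, with each hidden state mapped to a water level. The weights below are random stand-ins for trained parameters, and the three input channels (e.g. precipitation, evapotranspiration, pumping rate) are assumptions; only the cell equations and the per-step readout are standard.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 3, 8     # assumed: 3 forcing inputs per day, 8 hidden units

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# randomly initialised weights stand in for trained parameters
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
w_out, b_out = rng.standard_normal(n_hid) * 0.1, 0.0   # readout layer

def lstm_step(x, h, c):
    # standard LSTM cell: input, forget, cell-candidate, and output gates
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def predict(series):
    # many-to-many: feed inputs step by step, read out a water level each step
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    levels = []
    for x in series:
        h, c = lstm_step(x, h, c)
        levels.append(w_out @ h + b_out)   # readout: hidden state -> water level
    return np.array(levels)

forecast = predict(rng.standard_normal((7, n_in)))   # seven-day horizon
```

Feeding the previous step's forecast back in as an extra input channel, as the abstract describes, would make this fully recurrent in the forecast as well as in the state.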

