Automatic Segmentation of Thigh Muscle in Longitudinal 3D T1-Weighted Magnetic Resonance (MR) Images

Author(s):  
Zihao Tang ◽  
Chenyu Wang ◽  
Phu Hoang ◽  
Sidong Liu ◽  
Weidong Cai ◽  
...  

2014 ◽  
Vol 2014 ◽  
pp. 1-16
Author(s):  
Victoria Sherwood ◽  
John Civale ◽  
Ian Rivens ◽  
David J. Collins ◽  
Martin O. Leach ◽  
...  

A system which allows magnetic resonance (MR) and ultrasound (US) image data to be acquired simultaneously has been developed. B-mode and Doppler US were performed inside the bore of a clinical 1.5 T MRI scanner using a clinical 1–4 MHz US transducer with an 8-metre cable. Susceptibility artefacts and RF noise were introduced into MR images by the US imaging system. RF noise was minimised by using aluminium foil to shield the transducer. A study of MR and B-mode US image signal-to-noise ratio (SNR) as a function of transducer–phantom separation was performed using a gel phantom. This revealed that a 4 cm separation between the phantom surface and the transducer was sufficient to minimise the effect of the susceptibility artefact in MR images. MR-US imaging was demonstrated in vivo with the aid of a 2 mm VeroWhite 3D-printed spherical target placed over the thigh muscle of a rat. The target allowed single-point registration of MR and US images in the axial plane to be performed. The system was subsequently demonstrated as a tool for the targeting and visualisation of high-intensity focused ultrasound exposure in the rat thigh muscle.
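The SNR comparison described above can be sketched in a few lines. This is a minimal, hypothetical implementation (the function name and the flat-list ROI representation are assumptions, not the authors' pipeline): image SNR is taken as the mean intensity of a signal region of interest over the standard deviation of a background (noise) region, expressed in dB.

```python
import math

def snr_db(signal_roi, noise_roi):
    """SNR in dB: mean of the signal ROI divided by the standard
    deviation of a background (noise) ROI, then 20*log10."""
    mean_sig = sum(signal_roi) / len(signal_roi)
    mu_n = sum(noise_roi) / len(noise_roi)
    var_n = sum((v - mu_n) ** 2 for v in noise_roi) / len(noise_roi)
    return 20 * math.log10(mean_sig / math.sqrt(var_n))
```

Repeating such a measurement at several transducer–phantom separations yields an SNR-versus-distance curve like the one used to choose the 4 cm stand-off.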


2021 ◽  
pp. 20210185
Author(s):  
Michihito Nozawa ◽  
Hirokazu Ito ◽  
Yoshiko Ariji ◽  
Motoki Fukuda ◽  
Chinami Igarashi ◽  
...  

Objectives: The aims of the present study were to construct a deep learning model for automatic segmentation of the temporomandibular joint (TMJ) disc on magnetic resonance (MR) images, and to evaluate its performance on internal and external test data. Methods: In total, 1200 MR images of closed and open mouth positions in patients with temporomandibular disorder (TMD) were collected from two hospitals (Hospitals A and B). The training and validation data comprised 1000 images from Hospital A, which were used to create a segmentation model. Performance was evaluated using 200 images from Hospital A (internal validity test) and 200 images from Hospital B (external validity test). Results: Although recall (sensitivity) was lower on the Hospital B data than on the Hospital A data, both exceeded 80%. Precision (positive predictive value) was lower on the Hospital A test data for the position of anterior disc displacement. According to the intra-articular TMD classification, the proportion of accurately assigned TMJs was higher with images from Hospital A than with images from Hospital B. Conclusion: The segmentation deep learning model created in this study may be useful for identifying disc positions on MR images.
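The recall and precision figures reported above are standard pixel-wise measures on binary segmentation masks. A minimal sketch (function name and flat 0/1-list mask representation are illustrative assumptions, not the study's code):

```python
def recall_precision(pred, truth):
    """Pixel-wise recall (sensitivity) and precision (positive
    predictive value) for binary segmentation masks given as
    flat lists of 0/1 labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision
```

Computing these separately on the internal (Hospital A) and external (Hospital B) test sets is what exposes the generalization gap the abstract describes.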


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 21323-21331
Author(s):  
Mingwei Cai ◽  
Jiazhou Wang ◽  
Qing Yang ◽  
Ying Guo ◽  
Zhen Zhang ◽  
...  

2021 ◽  
Vol 7 (8) ◽  
pp. 133
Author(s):  
Jonas Denck ◽  
Jens Guehring ◽  
Andreas Maier ◽  
Eva Rothgang

A magnetic resonance imaging (MRI) exam typically consists of the acquisition of multiple MR pulse sequences, which are required for a reliable diagnosis. With the rise of generative deep learning models, approaches for the synthesis of MR images have been developed to synthesize additional MR contrasts, generate synthetic data, or augment existing data for AI training. While current generative approaches allow only the synthesis of specific sets of MR contrasts, we developed a method to generate synthetic MR images with adjustable image contrast. To this end, we trained a generative adversarial network (GAN) with a separate auxiliary classifier (AC) network to generate synthetic MR knee images conditioned on various acquisition parameters (repetition time, echo time, and image orientation). The AC determined the repetition time with a mean absolute error (MAE) of 239.6 ms, the echo time with an MAE of 1.6 ms, and the image orientation with an accuracy of 100%, so it can properly condition the generator network during training. Moreover, in a visual Turing test, two experts mislabeled 40.5% of real and synthetic MR images, demonstrating that the image quality of the generated synthetic and real MR images is comparable. This work can support radiologists and technologists during the parameterization of MR sequences by previewing the resulting MR contrast, can serve as a valuable tool for radiology training, and can be used for customized data generation to support AI training.
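Conditioning a generator on acquisition parameters amounts to feeding it a vector that encodes those parameters alongside the noise input. A minimal sketch of such an encoding, assuming scaled continuous parameters and a one-hot orientation (the function name, parameter ranges, and orientation set are illustrative assumptions, not taken from the paper):

```python
ORIENTATIONS = ("axial", "coronal", "sagittal")

def condition_vector(tr_ms, te_ms, orientation,
                     tr_range=(400.0, 4000.0), te_range=(5.0, 120.0)):
    """Build a conditioning vector for a conditional GAN generator:
    repetition time (TR) and echo time (TE) scaled to [0, 1],
    concatenated with a one-hot image orientation."""
    tr = (tr_ms - tr_range[0]) / (tr_range[1] - tr_range[0])
    te = (te_ms - te_range[0]) / (te_range[1] - te_range[0])
    one_hot = [1.0 if o == orientation else 0.0 for o in ORIENTATIONS]
    return [tr, te] + one_hot
```

During training, the auxiliary classifier regresses TR and TE and classifies the orientation from the generated image; the MAEs quoted above measure how faithfully the generator respects this conditioning.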


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Soo Hyun Park ◽  
Sang Ha Noh ◽  
Michael J. McCarthy ◽  
Seong Min Kim

Abstract: This study was carried out to develop a prediction model for the soluble solid content (SSC) of intact chestnuts and to detect internal defects using nuclear magnetic resonance (NMR) relaxometry and magnetic resonance imaging (MRI). Inversion recovery and Carr–Purcell–Meiboom–Gill (CPMG) pulse sequences were used to determine the longitudinal (T1) and transverse (T2) relaxation times, respectively. Partial least squares regression (PLSR) was adopted to predict the SSC of chestnuts from NMR data and histograms of MR images. The coefficient of determination (R2), root mean square error of prediction (RMSEP), ratio of prediction to deviation (RPD), and ratio of error range (RER) of the optimized model for predicting SSC were 0.77, 1.41 °Brix, 1.86, and 11.31, respectively, with a validation set. Furthermore, an image-processing algorithm was developed to detect internal defects such as decay, mold, and cavities in MR images; it classified these defects with over 94% accuracy. Based on these results, the NMR signal could be applied to grade chestnuts into several levels by SSC, and MRI could be used to evaluate the internal quality of chestnuts.
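The validation metrics quoted above follow standard chemometric definitions: RMSEP is the root mean square prediction error, RPD is the standard deviation of the reference values divided by RMSEP, and RER is the reference range divided by RMSEP. A minimal sketch (function name and list-based interface are illustrative assumptions):

```python
import math

def chemometric_metrics(predicted, reference):
    """RMSEP, RPD (sample SD of reference / RMSEP) and RER
    (reference range / RMSEP) for a validation set."""
    n = len(reference)
    rmsep = math.sqrt(sum((p - r) ** 2
                          for p, r in zip(predicted, reference)) / n)
    mean_r = sum(reference) / n
    sd = math.sqrt(sum((r - mean_r) ** 2 for r in reference) / (n - 1))
    rpd = sd / rmsep
    rer = (max(reference) - min(reference)) / rmsep
    return rmsep, rpd, rer
```

Higher RPD and RER values indicate a model whose error is small relative to the natural spread of the samples, which is why they accompany RMSEP when judging an SSC calibration.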

