Dose image prediction for range and width verifications from carbon ion-induced secondary electron bremsstrahlung x-rays using deep learning workflow
2020, Vol 47 (8), pp. 3520-3532
Author(s): Mitsutaka Yamaguchi, Chih-Chieh Liu, Hsuan-Ming Huang, Takuya Yabe, Takashi Akagi, et al.

2018, Vol 63 (4), pp. 045016
Author(s): Mitsutaka Yamaguchi, Yuto Nagao, Koki Ando, Seiichi Yamamoto, Makoto Sakai, et al.

EBioMedicine, 2021, Vol 70, pp. 103517
Author(s): Vineet K. Raghu, Michael T. Lu
Keyword(s): X-Rays

2021, Vol 11 (1)
Author(s): Makoto Nishimori, Kunihiko Kiuchi, Kunihiro Nishimura, Kengo Kusano, Akihiro Yoshida, et al.

Abstract: Cardiac accessory pathways (APs) in Wolff–Parkinson–White (WPW) syndrome are conventionally localized with decision-tree algorithms; however, these have limitations in clinical use. We assessed the efficacy of an artificial-intelligence model that uses electrocardiography (ECG) and chest X-rays to identify the location of APs. We retrospectively analysed ECG and chest X-ray data from 206 patients with WPW syndrome. Each AP location was defined by an electrophysiological study and assigned to one of four classes. We developed a deep learning model to classify AP locations and compared its accuracy with that of conventional algorithms. In addition, 1519 chest X-ray samples from other datasets were used for pretraining, and the combined chest X-ray and ECG data were then fed into this model to evaluate whether accuracy improved. The convolutional neural network (CNN) model using ECG data was significantly more accurate than the conventional tree algorithm, and accuracy improved significantly further in the multimodal model that took the combined ECG and chest X-ray data as input. Deep learning on combined ECG and chest X-ray data could effectively identify the AP location, making this a promising multimodal diagnostic model.
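The abstract does not specify the network architecture, so the multimodal step can only be sketched. A common pattern for combining two modalities is late fusion: each input is encoded separately and the feature vectors are concatenated before a shared classification head. Below is a minimal NumPy illustration with made-up layer sizes (the real CNN encoders are replaced by random linear projections; the feature dimensions and function names are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Stand-in for a modality-specific CNN encoder: linear map + ReLU."""
    return np.maximum(x @ w, 0.0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical feature sizes: an ECG summary vector (120) and an
# X-ray embedding (256); both are projected to 32-dim features.
w_ecg = rng.normal(size=(120, 32))
w_xray = rng.normal(size=(256, 32))
w_head = rng.normal(size=(64, 4))  # 4 accessory-pathway location classes

def predict_ap_location(ecg, xray):
    """Late fusion: concatenate modality features, then classify."""
    fused = np.concatenate([encode(ecg, w_ecg), encode(xray, w_xray)], axis=-1)
    return softmax(fused @ w_head)  # class probabilities, one row per patient

probs = predict_ap_location(rng.normal(size=(5, 120)), rng.normal(size=(5, 256)))
assert probs.shape == (5, 4) and np.allclose(probs.sum(axis=1), 1.0)
```

In practice the projections would be trained CNN backbones, and the X-ray branch would carry the pretrained weights mentioned in the abstract.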


2021, Vol 7 (7), pp. 105
Author(s): Guillaume Reichert, Ali Bellamine, Matthieu Fontaine, Beatrice Naipeanu, Adrien Altar, et al.

The growing need for emergency imaging has greatly increased the number of conventional X-rays, particularly for traumatic injury. Deep learning (DL) algorithms could improve fracture screening by radiologists and emergency room (ER) physicians. We used an algorithm developed for the detection of appendicular skeleton fractures and evaluated its performance for detecting traumatic fractures on conventional X-rays in the ER, without any training on local data. The algorithm was tested on all patients (N = 125) presenting to the Louis Mourier ER with limb trauma in May 2019. Patients were selected by two emergency physicians from the clinical database used in the ER, and their X-rays were exported and annotated by a radiologist. The algorithm's predictions were then compared with the radiologist's annotations. Of the 125 patients included, 25 had a fracture identified by the clinicians, 24 of whom were identified by the algorithm (sensitivity of 96%). The algorithm incorrectly predicted a fracture in 14 of the 100 patients without fractures (specificity of 86%). The negative predictive value was 98.85%. This study shows that DL algorithms are potentially valuable diagnostic tools for detecting fractures in the ER and could be used in the training of junior radiologists.
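The reported figures follow directly from the study's confusion-matrix counts (24 of 25 fractures detected; 14 false alarms among 100 fracture-free patients). A minimal sketch using the standard definitions:

```python
def classification_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of true fractures detected
    specificity = tn / (tn + fp)  # fraction of fracture-free X-rays cleared
    npv = tn / (tn + fn)          # confidence in a negative prediction
    return sensitivity, specificity, npv

# Counts reported in the study: 24/25 fractures found, 14/100 false alarms.
sens, spec, npv = classification_metrics(tp=24, fn=1, fp=14, tn=86)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, NPV={npv:.2%}")
# → sensitivity=96%, specificity=86%, NPV=98.85%
```

The high NPV (86/87 ≈ 98.85%) is what supports the screening use case: a negative prediction from the algorithm is very rarely a missed fracture in this cohort.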


2021, Vol 11 (15), pp. 6976
Author(s): Miroslav Jaščur, Marek Bundzel, Marek Malík, Anton Dzian, Norbert Ferenčík, et al.

Certain post-thoracic-surgery complications are conventionally monitored using methods that employ ionising radiation. Following the clinical trial of a novel lung ultrasound examination procedure that can replace X-rays, a need has arisen to automate the diagnostic workflow, and deep learning is a powerful tool for lung ultrasound analysis. We present a novel deep-learning method, automated M-mode classification, to detect the absence of lung sliding motion in lung ultrasound. Automated M-mode classification leverages semantic segmentation to select 2D slices across the temporal dimension of the video recording. These 2D slices are the input for a convolutional neural network, whose output indicates the presence or absence of lung sliding in the given time slot. We aggregate the partial predictions over the entire video recording to determine whether the subject has developed post-surgery complications. With a 64-frame version of this architecture, we detected lung sliding with an average balanced accuracy of 89%, sensitivity of 82%, and specificity of 92%. Automated M-mode classification is suitable for lung sliding detection in clinical lung ultrasound videos. For classifying lung sliding motion in such videos, we recommend time windows between 0.53 and 2.13 s, followed by aggregation.
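The abstract describes aggregating per-time-window predictions into one verdict per video but does not state the aggregation rule; majority voting is one simple, illustrative choice. Balanced accuracy is the mean of sensitivity and specificity (note that plugging the rounded per-class figures 82% and 92% into the formula gives 87%, slightly below the 89% average the study reports over its full evaluation):

```python
from collections import Counter

def aggregate_video(window_predictions):
    """Majority vote over per-window labels ('sliding' / 'absent').
    NOTE: the abstract does not specify the aggregation rule;
    majority voting is an illustrative assumption."""
    return Counter(window_predictions).most_common(1)[0][0]

def balanced_accuracy(sensitivity, specificity):
    """Mean of the two per-class recalls."""
    return (sensitivity + specificity) / 2

print(aggregate_video(["sliding", "absent", "sliding", "sliding"]))  # → sliding
print(f"{balanced_accuracy(0.82, 0.92):.2f}")  # → 0.87
```

Balanced accuracy is the appropriate summary here because videos with absent lung sliding are much rarer than normal ones, so plain accuracy would overstate performance.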

