Multi-Source Tri-Training Transfer Learning

2014 ◽  
Vol E97.D (6) ◽  
pp. 1668-1672 ◽  
Author(s):  
Yuhu CHENG ◽  
Xuesong WANG ◽  
Ge CAO

2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi134-vi134 ◽  
Author(s):  
Jacob Ellison ◽  
Francesco Caliva ◽  
Pablo Damasceno ◽  
Tracy Luks ◽  
Marisa LaFontaine ◽  
...  

Abstract Although current advances in automated glioma lesion segmentation and volumetric measurement using deep learning have yielded high performance on newly diagnosed patients, response assessment in neuro-oncology still relies on manually drawn, cross-sectional areas of the tumor because these models do not generalize to patients in the post-treatment setting, where they are most needed in the clinic. Surgical resection, adjuvant treatment, or disease progression can alter the characteristics of these lesions on T2-weighted imaging, causing measures of segmentation accuracy, typically quantified by Dice coefficients of overlap (DCs), to drop by ~15%. To improve the generalizability of T2-lesion segmentation to patients with glioma post-treatment, we evaluated the effects of: 1) training with different proportions of newly diagnosed and treated gliomas, 2) applying transfer learning from the pre-treatment to the post-treatment domain, and 3) incorporating a loss term that spatially weights the lesion boundaries with greater emphasis during training. Using 425 patients (208 newly diagnosed, 217 post-treatment, with 25 treated patients withheld as a test set) and a top-performing model previously trained on newly diagnosed gliomas, we found that DCs increased by 10% (to 0.84) and then plateaued after including ~25% post-treatment patients in training. Transfer learning (pre-training on newly diagnosed data and fine-tuning with post-treatment data) significantly improved Hausdorff distances (HDs), a measure more sensitive to changes at the lesion boundaries, by 17% after including 26% post-treatment images in training, while DCs remained similar. Although modifying our loss function with boundary-weighted penalization resulted in DCs comparable to the standard Dice loss, HD measures were further reduced by 26%, suggesting that HD may be a more sensitive metric than DC to subtle changes in segmentation accuracy. Current work is evaluating the utility of these models in providing accurate volumes for real-time response assessment in the clinic, using workflows recently deployed on our clinical PACS system.
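The metrics and loss term discussed in this abstract can be sketched as follows. This is an illustrative NumPy implementation for 2-D binary (boolean) masks, not the authors' code: the 4-neighbor boundary definition and the boundary weight factor `w` are assumptions, since the abstract does not specify the exact weighting scheme.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient of overlap (DC) between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def boundary(mask):
    """Foreground pixels of a 2-D boolean mask with at least one
    background 4-neighbor (a simple boundary definition)."""
    p = np.pad(mask, 1)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    return mask & ~interior

def hausdorff(pred, truth):
    """Symmetric Hausdorff distance (HD) between the boundaries of two
    non-empty masks: the worst-case distance from one boundary to the
    nearest point of the other."""
    a = np.argwhere(boundary(pred)).astype(float)
    b = np.argwhere(boundary(truth)).astype(float)
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def boundary_weighted_dice_loss(prob, truth, w=4.0):
    """Soft Dice loss that up-weights pixels on the reference boundary.

    `w` is a hypothetical boundary weight; the paper's exact spatial
    weighting scheme is not given in the abstract."""
    weights = 1.0 + (w - 1.0) * boundary(truth)
    inter = (weights * prob * truth).sum()
    denom = (weights * (prob + truth)).sum()
    return 1.0 - 2.0 * inter / denom
```

Shifting a mask sideways barely changes the Dice coefficient but moves every boundary pixel, which is why HD is the more sensitive of the two to boundary errors.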


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2320 ◽  
Author(s):  
Ender Çetin ◽  
Cristina Barrado ◽  
Enric Pastor

Counter-drone technology using artificial intelligence (AI) is an emerging and rapidly developing field. Considering recent advances in AI, counter-drone systems with AI can be very accurate and efficient in fighting drones. The time required to engage the target can be shorter than with other methods based on human intervention, such as bringing down a malicious drone with a machine gun. AI can also identify and classify the target with high precision, preventing false interdiction of the targeted object. We believe that counter-drone technology with AI will bring important advantages in addressing the threats posed by some drones and will help make the skies safer and more secure. In this study, a deep reinforcement learning (DRL) architecture is proposed to counter a drone with another drone, the learning drone, which autonomously avoids all kinds of obstacles inside a suburban neighborhood environment. The environment is simulated and contains stationary obstacles such as trees, cables, parked cars, and houses. In addition, another non-malicious third drone, acting as a moving obstacle inside the environment, was also included. In this way, the learning drone is trained to detect stationary and moving obstacles, and to counter and catch the target drone without crashing into any other obstacle in the neighborhood. The learning drone has a front camera that continuously captures depth images. Every depth image is part of the state used in the DRL architecture. There are also scalar state parameters such as velocities, distances to the target, distances to defined geofences and the track, and elevation angles. The state image and scalars are processed by a neural network that joins the two state parts into a single flow. Moreover, transfer learning is tested by reusing the weights of the first fully trained model. With transfer learning, one of the best jump-starts achieved higher mean rewards (close to 35 more) at the beginning of training. Transfer learning also reduced the number of crashes during training: the total number of crashed episodes dropped by 65% when all ground obstacles were included.
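The joining of the depth-image state and the scalar state into a single flow can be sketched as follows. This is a minimal NumPy illustration with hypothetical shapes (a 64×64 depth frame, 7 scalar readings, and mean-pooling standing in for the learned convolutional branch), not the authors' network.

```python
import numpy as np

def image_features(depth_image, grid=8):
    """Stand-in for the convolutional branch: mean-pool the depth frame
    into a coarse grid and flatten it. A real agent would use learned
    convolutional filters here."""
    h, w = depth_image.shape
    pooled = depth_image.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return pooled.ravel()

def fuse_state(depth_image, scalars):
    """Join the image branch with the scalar state (velocities, distances
    to the target and geofences, elevation angles) into one vector for the
    downstream fully connected layers."""
    return np.concatenate([image_features(depth_image),
                           np.asarray(scalars, dtype=float)])

# Hypothetical example: one 64x64 depth frame and 7 made-up scalar readings.
depth = np.random.default_rng(0).random((64, 64))
scalars = [1.2, -0.4, 0.0, 12.5, 3.1, 0.7, 0.2]
state = fuse_state(depth, scalars)  # 8*8 pooled features + 7 scalars = 71 values
```

Concatenating the flattened image features with the scalars is the simplest way to merge the two state modalities; a jump-start from transfer learning would then correspond to initializing the downstream layers with the weights of the first fully trained model rather than at random.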


2012 ◽  
Author(s):  
Ramon D. Wenzel ◽  
John Cordery