Single-Shot Interaction and Synchronization of Random Microcavity Lasers

2021 ◽  
pp. 2100562
Author(s):  
Hongyang Zhu ◽  
Weili Zhang ◽  
Jinchuan Zhang ◽  
Rui Ma ◽  
Zhao Wang ◽  
...  
2004 ◽  
pp. 373-380 ◽  
Author(s):  
Timothy D. Solberg ◽  
Steven J. Goetsch ◽  
Michael T. Selch ◽  
William Melega ◽  
Goran Lacan ◽  
...  

Object. The purpose of this work was to investigate the targeting and dosimetric characteristics of a linear accelerator (LINAC) system dedicated for stereotactic radiosurgery compared with those of a commercial gamma knife (GK) unit. Methods. A phantom was rigidly affixed within a Leksell stereotactic frame, and axial computerized tomography scans were obtained using an appropriate stereotactic localization device. Treatment plans were performed, film was inserted into a recessed area, and the phantom was positioned and treated according to each treatment plan. In the case of the LINAC system, four 140° arcs, spanning ± 60° of couch rotation, were used. In the case of the GK unit, all 201 sources were left unplugged. Radiation was delivered using 3- and 8-mm LINAC collimators and 4- and 8-mm collimators of the GK unit. Targeting ability was investigated independently on the dedicated LINAC by using a primate model. Measured 50% spot widths for multisource, single-shot radiation exceeded nominal values in all cases, by 38 to 70% for the GK unit and 11 to 33% for the LINAC system. Measured offsets were indicative of submillimeter targeting precision on both devices. In primate studies, the appearance of a magnetic resonance imaging-enhancing lesion coincided with the intended target. Conclusions. Radiosurgery performed using the 3-mm collimator of the dedicated LINAC exhibited characteristics that compared favorably with those of a dedicated GK unit. Overall targeting accuracy in the submillimeter range can be achieved, and dose distributions with sharp falloff can be expected for both devices.
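The 50% spot widths reported above are measured from film dose profiles: the width between the two points where the dose falls to half its maximum. A minimal numpy sketch of that measurement is shown below; the function name and the Gaussian test profile are illustrative, not part of the study's actual analysis pipeline.

```python
import numpy as np

def spot_width_50(positions_mm, dose):
    """Width (mm) between the two 50%-of-maximum crossings of a 1-D dose profile."""
    half = 0.5 * np.max(dose)
    idx = np.where(dose >= half)[0]
    left, right = idx[0], idx[-1]

    def interp(i0, i1):
        # linearly interpolate the position of the half-maximum crossing
        d0, d1 = dose[i0], dose[i1]
        x0, x1 = positions_mm[i0], positions_mm[i1]
        return x0 + (half - d0) * (x1 - x0) / (d1 - d0)

    x_left = interp(left - 1, left) if left > 0 else positions_mm[left]
    x_right = interp(right, right + 1) if right < len(dose) - 1 else positions_mm[right]
    return x_right - x_left

# example: synthetic Gaussian profile with a nominal 4 mm full width at half maximum
x = np.linspace(-10, 10, 2001)
sigma = 4.0 / (2 * np.sqrt(2 * np.log(2)))
profile = np.exp(-x**2 / (2 * sigma**2))
print(round(spot_width_50(x, profile), 2))  # ≈ 4.0
```

A measured width exceeding the nominal collimator size by, say, 38% would show up here as a returned width of 5.5 mm for a nominal 4-mm profile.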


2019 ◽  
Author(s):  
Nina Wressnigg ◽  
Romana Hochreiter ◽  
Oliver Zoihsl ◽  
Andrea Fritzer ◽  
Nicole Bézay ◽  
...  

2019 ◽  
Vol 9 (6) ◽  
pp. 1128 ◽  
Author(s):  
Yundong Li ◽  
Wei Hu ◽  
Han Dong ◽  
Xueyan Zhang

Aerial cameras, satellite remote sensing, and unmanned aerial vehicles (UAVs) equipped with cameras can facilitate search and rescue tasks after disasters. The traditional manual interpretation of huge aerial images is inefficient and could be replaced by machine learning-based methods combined with image processing techniques. With the development of machine learning, researchers have found that convolutional neural networks can effectively extract features from images. Some target detection methods based on deep learning, such as the single-shot multibox detector (SSD) algorithm, can achieve better results than traditional methods. However, the impressive performance of machine learning-based methods depends on numerous labeled samples, and given the complexity of post-disaster scenarios, obtaining many samples in the aftermath of disasters is difficult. To address this issue, a damaged building assessment method using SSD with pretraining and data augmentation is proposed in the current study, with the following highlights. (1) Objects can be detected and classified into undamaged buildings, damaged buildings, and ruins. (2) A convolutional auto-encoder (CAE) based on VGG16 is constructed and trained using unlabeled post-disaster images. As a transfer learning strategy, the weights of the SSD model are initialized using the weights of the CAE counterpart. (3) Data augmentation strategies, such as image mirroring, rotation, Gaussian blur, and Gaussian noise processing, are utilized to augment the training data set. As a case study, aerial images of Hurricane Sandy in 2012 were used to validate the proposed method's effectiveness. Experiments show that the pretraining strategy improves overall accuracy by 10% compared with the SSD trained from scratch, and that the data augmentation strategies improve mAP and mF1 by 72% and 20%, respectively. Finally, the method is further verified on another dataset, from Hurricane Irma, confirming that the proposed method is feasible.
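The augmentation strategies listed in point (3) can be sketched in a few lines of numpy. The snippet below is a minimal illustration covering mirroring, rotation, and additive Gaussian noise; Gaussian blur, also used in the study, would typically be applied with an image-processing library and is omitted here. The function name and noise parameters are assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Yield augmented copies of an H x W x C uint8 image array:
    a horizontal mirror, three 90-degree rotations, and a noisy copy."""
    yield np.fliplr(image)                    # mirroring
    for k in (1, 2, 3):
        yield np.rot90(image, k)              # rotation
    noisy = image.astype(float) + rng.normal(0.0, 10.0, image.shape)
    yield np.clip(noisy, 0, 255).astype(image.dtype)  # additive Gaussian noise

# each input image yields five additional training samples
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
augmented = list(augment(img))
print(len(augmented))  # 5
```

In practice such augmentations are applied on the fly during training, so the effective dataset size grows without storing extra images.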


Symmetry ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 1718
Author(s):  
Chien-Hsing Chou ◽  
Yu-Sheng Su ◽  
Che-Ju Hsu ◽  
Kong-Chang Lee ◽  
Ping-Hsuan Han

In this study, we designed a four-dimensional (4D) audiovisual entertainment system called Sense. This system comprises a scene recognition system and hardware modules that provide haptic sensations for users when they watch movies and animations at home. In the scene recognition system, we used Google Cloud Vision to detect common scene elements in a video, such as fire, explosions, wind, and rain, and to further determine whether the scene depicts hot weather, rain, or snow. Additionally, for animated videos, we applied deep learning with a single-shot multibox detector to detect whether the animated video contained scenes of fire-related objects. The hardware module was designed to provide six types of haptic sensations, arranged with line symmetry to improve the user experience. Based on the object detection results from the scene recognition system, the system generates the corresponding haptic sensations. The system integrates deep learning, auditory signals, and haptic sensations to provide an enhanced viewing experience.
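The core control flow described above is a mapping from recognized scene labels to haptic cues. A minimal sketch of that dispatch is shown below; the label strings and effect names are hypothetical placeholders, not the paper's actual identifiers or hardware commands.

```python
# Hypothetical mapping from recognized scene labels to haptic effects.
SCENE_TO_HAPTICS = {
    "fire": ["heat"],
    "explosion": ["heat", "vibration"],
    "wind": ["airflow"],
    "rain": ["water_mist"],
    "snow": ["cold"],
    "hot_weather": ["heat"],
}

def haptics_for(labels):
    """Collect the haptic effects to trigger for the labels detected in a frame,
    preserving order and de-duplicating effects shared by multiple labels."""
    effects = []
    for label in labels:
        for effect in SCENE_TO_HAPTICS.get(label, []):
            if effect not in effects:
                effects.append(effect)
    return effects

print(haptics_for(["explosion", "rain"]))  # ['heat', 'vibration', 'water_mist']
```

A real implementation would drive the six hardware modules from this list once per recognized scene, rather than per frame, to avoid rapid toggling.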

