The Shrank YoloV3-tiny for spinal fracture lesions detection

2021 ◽  
pp. 1-20
Author(s):  
Gang Sha ◽  
Junsheng Wu ◽  
Bin Yu

Purpose: At present, more and more deep learning algorithms are used to detect and segment lesions in spinal CT (Computed Tomography) images. However, these algorithms usually require high-performance computers and occupy large amounts of resources, so they are not suitable for clinical embedded and mobile devices, which have only limited computational resources yet still need reasonably good performance in detecting and segmenting lesions. Methods: In this paper, we present a small-size model based on Yolov3-tiny to detect three spinal fracture lesions: cfracture (cervical fracture), tfracture (thoracic fracture), and lfracture (lumbar fracture). We construct this novel model by replacing the traditional convolutional layers in YoloV3-tiny with fire modules from SqueezeNet, so as to reduce the number of parameters and the model size while keeping lesion detection accurate. We then remove the batch normalization layers from the fire modules after comparative experiments; although the overall performance of the fire module without batch normalization layers is only slightly improved, this reduces computational complexity and lowers the use of computer resources for fast lesion detection. Results: The experiments show that the shrunken model has a size of only 13 MB (almost a third of Yolov3-tiny), while the mAP (mean Average Precision) is 91.3% and the IOU (intersection over union) is 90.7. The detection time is 0.015 seconds per CT image, and the BFLOP/s (Billion Floating Point Operations per Second) value is lower than that of Yolov3-tiny. Conclusion: The model we present can be deployed in clinical embedded and mobile devices while providing relatively accurate and rapid real-time lesion detection.
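Below is a minimal, hedged sketch (not the authors' code) of the kind of SqueezeNet fire module, without batch normalization layers, that the abstract describes substituting for YoloV3-tiny's convolutional layers. The channel sizes and the PyTorch framework are illustrative assumptions.

```python
# Sketch of a SqueezeNet-style fire module without batch normalization,
# as described in the abstract; channel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_channels, squeeze_channels, expand_channels):
        super().__init__()
        # 1x1 "squeeze" convolution cuts the channel count (and parameters) ...
        self.squeeze = nn.Conv2d(in_channels, squeeze_channels, kernel_size=1)
        # ... then parallel 1x1 and 3x3 "expand" convolutions restore capacity.
        self.expand1x1 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_channels, expand_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # No batch-norm layers here: the two expand branches are simply
        # activated and concatenated along the channel dimension.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: a fire module standing in for a 3x3 convolution with 256 output channels.
fire = Fire(in_channels=128, squeeze_channels=16, expand_channels=128)
out = fire(torch.randn(1, 128, 52, 52))  # -> shape (1, 256, 52, 52)
```

For these illustrative sizes, a plain 3x3 convolution from 128 to 256 channels holds about 295k weights, whereas the fire module holds roughly 23k, which is the kind of parameter reduction that shrinks the overall model size.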

2021 ◽  
pp. 1-15
Author(s):  
Gang Sha ◽  
Junsheng Wu ◽  
Bin Yu

Purpose: Reading spinal CT (Computed Tomography) images is very important in the diagnosis of spondylosis, but it is time-consuming and prone to bias. In this paper, we propose a framework based on Faster-RCNN to improve detection performance for three spinal fracture lesions: cfracture (cervical fracture), tfracture (thoracic fracture), and lfracture (lumbar fracture). Methods: First, we use ResNet50 to replace VGG16 as the backbone network in Faster-RCNN to increase the depth of the training network. Second, we use soft-NMS (Non-Maximum Suppression) instead of NMS to avoid missed detection of overlapping lesions. Third, we simplify the RPN (Region Proposal Network) to accelerate training and reduce missed detections. Finally, we modify the classifier layer in Faster-RCNN and choose appropriate length-width ratios by changing the anchor sizes in the sliding window, and adopt a multi-scale training strategy to improve efficiency and accuracy. Results: The experimental results show that the proposed scheme performs well: mAP (mean average precision) is 90.6%, IOU (Intersection over Union) is 88.5, and detection time is 0.053 seconds per CT image, which means our proposed method can accurately detect spinal fracture lesions. Conclusion: Our proposed method can provide assistance and scientific references for both doctors and patients in clinical practice.
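For the second step, the replacement of hard NMS by soft-NMS, a minimal sketch is shown below (assuming the linear-decay variant and a NumPy box format of [x1, y1, x2, y2]; the thresholds are illustrative, not taken from the paper). Instead of discarding boxes that overlap the current best detection, soft-NMS only lowers their scores, which is what helps avoid missed detections of overlapped lesions.

```python
# Minimal linear soft-NMS sketch (illustrative thresholds, not the paper's values).
import numpy as np

def iou(box, boxes):
    """IoU of one box [x1, y1, x2, y2] against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def soft_nms(boxes, scores, iou_thresh=0.5, score_thresh=0.001):
    boxes, scores = boxes.copy(), scores.copy()
    kept = []
    while scores.size > 0:
        best = int(np.argmax(scores))
        kept.append((boxes[best], float(scores[best])))
        overlaps = iou(boxes[best], boxes)
        # Hard NMS would delete every box whose overlap exceeds iou_thresh;
        # soft-NMS instead decays its score in proportion to the overlap.
        decay = np.where(overlaps > iou_thresh, 1.0 - overlaps, 1.0)
        scores = scores * decay
        mask = scores > score_thresh
        mask[best] = False
        boxes, scores = boxes[mask], scores[mask]
    return kept
```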


Author(s):  
Chakkrit Termritthikun ◽  
Paisarn Muneesawang

The growth of high-performance mobile devices has resulted in more research into on-device image recognition. The main research problems have been the latency and accuracy of automatic recognition, which remain obstacles to real-world usage. Although recently developed deep neural networks can achieve accuracy comparable to that of a human user, some of them are still too slow. This paper describes the development of the architecture of a new convolutional neural network model, NU-LiteNet. For this, SqueezeNet was developed further to reduce the model size to a degree suitable for smartphones; the model size of NU-LiteNet is therefore 2.6 times smaller than that of SqueezeNet. The model outperformed other convolutional neural network (CNN) models for mobile devices (e.g., SqueezeNet and MobileNet) with accuracies of 81.15% and 69.58% on the Singapore and Paris landmark datasets, respectively. The shortest execution time, 0.7 seconds per image, was recorded with NU-LiteNet on mobile phones.


2009 ◽  
pp. 254-261
Author(s):  
Loreno Oliveira ◽  
Emerson Loureiro ◽  
Hyggo Almeida ◽  
Angelo Perkusich

Nowadays, we are experiencing increasing use of mobile and embedded devices. These devices, aided by the emergence of new wireless technologies and software paradigms, among other technological advances, provide the means to realize the vision of a new era in computer science. In this vision, the way we create and use computational systems changes drastically toward a model in which computers lose their "computer appearance": their sizes are reduced, cables are replaced by wireless connections, and they become part of everyday objects such as clothes, automobiles, and domestic equipment. Initially called ubiquitous computing, this paradigm of computation is also known as pervasive computing (Weiser, 1991). It is mainly characterized by the use of portable devices that interact with other portable devices and with resources from wired networks to offer personalized services to users. While enabling pervasive computing, these portable devices also bring new challenges to research in this area, with the major problems arising from the devices' limitations. At the same time that pervasive computing was gaining ground within the research community, the field of grid computing (Foster, Kesselman, & Tuecke, 2001) was also gaining visibility and growing in maturity and importance. More than just a low-cost platform for high-performance computing, grid computing emerges as a solution for the virtualization and sharing of computational resources. In the context of virtual organizations, both grid and pervasive computing offer a number of features that are quite desirable for several scenarios within this field, notably the exchange of information and computational resources among environments and organizations. The features of these technologies enable system designers to provide new and enhanced kinds of services in different contexts, such as industry, marketing, commerce, education, business, and convenience. Furthermore, as time goes on, researchers have attempted to extract and combine the best of the two technologies, thus fostering the evolution of existing solutions and the development of new applications. On the one hand, pervasive computing researchers are essentially interested in using wired grids to hide the limitations of mobile devices; on the other hand, grid computing researchers are broadening the diversity of resources attached to the grid by incorporating mobile devices. This chapter presents part of our experience in researching both pervasive and grid computing. We start with an overview of grid and pervasive technologies. We then describe and discuss approaches for combining pervasive and grid computing, presented from the perspectives of both grid and pervasive computing research. Finally, in the last section, we present our criticisms of the approaches discussed and our hopes for the future of this blend of technologies.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Mingliang Li ◽  
Jianmin Pang ◽  
Feng Yue ◽  
Fudong Liu ◽  
Jun Wang ◽  
...  

Dynamic binary translation (DBT) is gaining importance in mobile computing. Mobile Edge Computing (MEC) augments mobile devices with powerful servers, but edge servers and smartphones are usually based on heterogeneous architectures. To leverage high-performance resources on servers, code offloading, which relies on DBT, is an ideal approach. In addition, mobile devices equipped with multicore processors and GPUs are becoming ubiquitous. Migrating x86_64 application binaries to mobile devices by using DBT can also help provide various mobile applications, e.g., multimedia applications. However, the translation efficiency and overall performance of DBT for application migration are not satisfactory, because of runtime overhead and the low quality of the translated code. Meanwhile, traditional DBT systems do not fully exploit the computational resources provided by multicore processors, especially when translating sequential guest applications. In this work, we focus on leveraging ubiquitous multicore processors to improve DBT performance by parallelizing sequential applications during translation. To this end, we propose LLPEMU, a DBT framework that combines binary translation with polyhedral optimization. We investigate the obstacles to adapting existing polyhedral optimizations in compilers to DBT and present a feasible method to overcome these issues. In addition, LLPEMU adopts a static-dynamic combination to ensure that sequential binaries are parallelized while incurring low runtime overhead. Our evaluation results show that LLPEMU outperforms QEMU significantly on the PolyBench benchmark.


2014 ◽  
Vol 687-691 ◽  
pp. 3733-3737
Author(s):  
Dan Wu ◽  
Ming Quan Zhou ◽  
Rong Fang Bie

Massive image processing places high demands on the processor and memory, requiring a high-performance processor and large-capacity memory, while single-processor or single-core processing and traditional memory cannot satisfy these needs. This paper introduces cloud computing into a massive image processing system. Through cloud computing, the system expands its virtual space, saves computer resources, and improves the efficiency of image processing. The system processor uses a multi-core DSP parallel processor, and a visualization parameter-setting window and output display are developed using VC software. Through simulation we obtain the image processing speed curve and the system's image adaptive curve. This provides a technical reference for the design of large-scale image processing systems.


2014 ◽  
Vol 543-547 ◽  
pp. 2418-2421
Author(s):  
Yong Wang

In this paper we introduce cross-tree and block mathematical principles into the design of a database system, partition the time sequence and storage space of the computer database system, establish the mathematical model and algorithm of a computer-resource database system, and design a test database system. We use the high-performance DisplayPort interface, communicating over two controlled ports by way of coupling, and use an RHEL 6.2 Linux virtual machine to run simulation experiments on the database system's processes. Through the simulation we find that the API call patterns of Read, Close, Mmap, Stat, and Fstat are similar, which is consistent with the actual situation and verifies the reliability of the program. Finally, we apply the database system to the network database of sports literature resources in the new town of the Poyang Lake area, achieving the sharing of sports resources by all. This provides technical support for the application of computer database systems.


2021 ◽  
Vol 11 ◽  
Author(s):  
Yubizhuo Wang ◽  
Jiayuan Shao ◽  
Pan Wang ◽  
Lintao Chen ◽  
Mingliang Ying ◽  
...  

Background: Our aim was to establish a deep learning radiomics method to preoperatively evaluate regional lymph node (LN) staging for hilar cholangiocarcinoma (HC) patients. Methods and Materials: Of the 179 enrolled HC patients, 90 were pathologically diagnosed with lymph node metastasis. Quantitative radiomic features and deep learning features were extracted. An LN metastasis status classifier was developed by integrating a support vector machine, a high-performance deep learning radiomics signature, and three clinical characteristics. An LN metastasis stratification classifier (N1 vs. N2) was also proposed with subgroup analysis. Results: The average areas under the receiver operating characteristic curve (AUCs) of the LN metastasis status classifier reached 0.866 in the training cohort and 0.870 in the external test cohorts. Meanwhile, the LN metastasis stratification classifier performed well in predicting the risk of LN metastasis, with an average AUC of 0.946. Conclusions: Two classifiers derived from computed tomography images performed well in predicting LN staging in HC and will be reliable evaluation tools to improve decision-making.
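As a rough illustration of how such a classifier can be assembled, the sketch below (synthetic data and assumed feature dimensions, not the authors' pipeline) concatenates radiomic features, a deep learning signature, and clinical characteristics and trains a support vector machine on the LN metastasis label.

```python
# Illustrative fusion of radiomic, deep, and clinical features with an SVM;
# all data here are synthetic placeholders with assumed dimensions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 179
radiomic = rng.normal(size=(n_patients, 100))  # quantitative radiomic features
deep_sig = rng.normal(size=(n_patients, 64))   # deep learning radiomics signature
clinical = rng.normal(size=(n_patients, 3))    # three clinical characteristics
y = rng.integers(0, 2, size=n_patients)        # LN metastasis status (0 = negative, 1 = positive)

# Concatenate the three feature groups and evaluate an RBF-kernel SVM by AUC.
X = np.hstack([radiomic, deep_sig, clinical])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```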


2013 ◽  
Vol 69 (7) ◽  
pp. 1274-1282 ◽  
Author(s):  
Nicholas K. Sauter ◽  
Johan Hattne ◽  
Ralf W. Grosse-Kunstleve ◽  
Nathaniel Echols

Current pixel-array detectors produce diffraction images at extreme data rates (of up to 2 TB per hour) that make severe demands on computational resources. New multiprocessing frameworks are required to achieve rapid data analysis, as it is important to be able to inspect the data quickly in order to guide the experiment in real time. By utilizing readily available web-serving tools that interact with the Python scripting language, it was possible to implement a high-throughput Bragg-spot analyzer (cctbx.spotfinder) that is presently in use at numerous synchrotron-radiation beamlines. Similarly, Python interoperability enabled the production of a new data-reduction package (cctbx.xfel) for serial femtosecond crystallography experiments at the Linac Coherent Light Source (LCLS). Future data-reduction efforts will need to focus on specialized problems such as the treatment of diffraction spots on interleaved lattices arising from multi-crystal specimens. In these challenging cases, accurate modeling of close-lying Bragg spots could benefit from the high-performance computing capabilities of graphics-processing units.


2015 ◽  
pp. 1933-1955
Author(s):  
Tolga Soyata ◽  
He Ba ◽  
Wendi Heinzelman ◽  
Minseok Kwon ◽  
Jiye Shi

With the recent advances in cloud computing and the capabilities of mobile devices, the state-of-the-art of mobile computing is at an inflection point, where compute-intensive applications can now run on today's mobile devices with limited computational capabilities. This is achieved by using the communications capabilities of mobile devices to establish high-speed connections to vast computational resources located in the cloud. While the execution scheme based on this mobile-cloud collaboration opens the door to many applications that can tolerate response times on the order of seconds and minutes, it proves to be an inadequate platform for running applications demanding real-time response within a fraction of a second. In this chapter, the authors describe the state-of-the-art in mobile-cloud computing as well as the challenges faced by traditional approaches in terms of their latency and energy efficiency. They also introduce the use of cloudlets as an approach for extending the utility of mobile-cloud computing by providing compute and storage resources accessible at the edge of the network, both for end processing of applications as well as for managing the distribution of applications to other distributed compute resources.


Author(s):  
Atta ur Rehman Khan ◽  
Abdul Nasir Khan

Mobile devices are gaining high popularity due to support for a wide range of applications. However, mobile devices are resource constrained, and many applications require substantial resources. To address this issue, researchers envision the use of mobile cloud computing technology, which offers high-performance computing, execution of resource-intensive applications, and energy efficiency. This chapter highlights the importance of mobile devices, high-performance applications, and the computing challenges of mobile devices. It also provides a brief introduction to mobile cloud computing technology, its architecture, types of mobile applications, the computation offloading process, effective offloading challenges, and the high-performance computing applications on mobile devices that are enabled by mobile cloud computing technology.

