A CUDA-powered method for the feature extraction and unsupervised analysis of medical images

Author(s):  
Leonardo Rundo ◽  
Andrea Tangherloni ◽  
Paolo Cazzaniga ◽  
Matteo Mistri ◽  
Simone Galimberti ◽  
...  

Image texture extraction and analysis are fundamental steps in computer vision. In particular, in the biomedical field, quantitative imaging methods are increasingly gaining importance because they convey scientifically and clinically relevant information for prediction, prognosis, and treatment response assessment. In this context, radiomic approaches are fostering large-scale studies that can have a significant impact on clinical practice. In this work, we present a novel method, called CHASM (Cuda, HAralick & SoM), which is accelerated on the graphics processing unit (GPU) for quantitative imaging analyses based on Haralick features and on the self-organizing map (SOM). The Haralick feature extraction step relies upon the gray-level co-occurrence matrix, which is computationally burdensome for medical images characterized by a high bit depth. The downstream analyses exploit the SOM with the goal of identifying the underlying clusters of pixels in an unsupervised manner. CHASM is conceived to leverage the parallel computation capabilities of modern GPUs. Analyzing ovarian cancer computed tomography images, CHASM achieved speed-up factors of up to ∼19.5× for the Haralick feature extraction and ∼37× for the SOM execution, compared to the corresponding sequential C++ implementations. These computational results point out the potential of GPUs in clinical research.
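
To make the GLCM step concrete, the following is a minimal sketch, assuming 8-bit quantization and a single horizontal pixel offset; the function names are illustrative, and CHASM's actual CUDA kernels are not reproduced here. The quadratic growth of the co-occurrence matrix with the number of gray levels is what makes high-bit-depth images burdensome.

```python
# Minimal GLCM + two Haralick descriptors (contrast, energy).
# Assumptions: 8-bit gray levels, single offset (dx, dy); not CHASM's kernels.
import numpy as np

def glcm(image, levels=256, dx=1, dy=0):
    """Count co-occurrences of gray levels at offset (dx, dy), then normalize."""
    h, w = image.shape
    counts = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            counts[image[y, x], image[y + dy, x + dx]] += 1
    return counts / counts.sum()

def haralick_contrast(p):
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)   # weights distant gray-level pairs heavily

def haralick_energy(p):
    return np.sum(p ** 2)             # angular second moment

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
p = glcm(img)
print(haralick_contrast(p), haralick_energy(p))
```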

2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly useful to the healthcare and biomedical sectors for predicting disease. For minor symptoms, it is often difficult to consult a doctor at the hospital at any time; big data can instead provide essential information about diseases on the basis of a patient’s symptoms. For many medical organizations, disease prediction is important for making the best feasible healthcare decisions. Conversely, the conventional medical-care model offers structured input, which demands more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning approach. Different datasets pertaining to “Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson’s disease, and Alzheimer’s disease” are gathered from the benchmark UCI repository for conducting the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, each dataset is normalized so that all attributes fall within a common range. Then, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge the deviations between samples. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning models, namely a “Deep Belief Network (DBN) and Recurrent Neural Network (RNN)”. As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation of the proposed prediction approach against existing models confirms its effectiveness across various performance measures.
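
As a rough illustration of the normalization and weighted feature extraction phases, a minimal sketch follows, assuming min-max scaling; the weight vector is a placeholder standing in for the JA-MVO-optimized weights, which are not reproduced here.

```python
# Sketch of phases (a) and (b): min-max normalization, then per-attribute
# weighting. The weights below are random stand-ins for JA-MVO output.
import numpy as np

def min_max_normalize(X):
    """Scale every attribute into [0, 1] so ranges are comparable."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def weighted_features(X_norm, weights):
    """Multiply each attribute by its weight to enlarge sample deviations."""
    return X_norm * weights

X = np.random.rand(100, 8)   # stand-in for a UCI dataset (100 rows, 8 attributes)
w = np.random.rand(8)        # stand-in for JA-MVO-optimized weights
features = weighted_features(min_max_normalize(X), w)
```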


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Adeoluwa Akande ◽  
Ana Cristina Costa ◽  
Jorge Mateu ◽  
Roberto Henriques

The explosion of data in the information age has provided an opportunity to explore the possibility of characterizing climate patterns using data mining techniques. Nigeria has a unique tropical climate with two precipitation regimes: low precipitation in the north, leading to aridity and desertification, and high precipitation in parts of the southwest and southeast, leading to large-scale flooding. In this research, four indices have been used to characterize the intensity, frequency, and amount of rainfall over Nigeria. A type of artificial neural network called the self-organizing map has been used to reduce the dimensionality of the data and produce four unique zones characterizing extreme precipitation conditions in Nigeria. This approach allowed for the assessment of spatial and temporal patterns in extreme precipitation over the last three decades. Precipitation properties in each cluster are discussed. The cluster closest to the Atlantic has high values of precipitation intensity, frequency, and duration, whereas the cluster closest to the Sahara Desert has low values. A significant increasing trend has been observed in the frequency of rainy days at the center of the northern region of Nigeria.
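
A minimal sketch of SOM training on a small grid is given below; the 2×2 map mirrors the four zones reported here, while the learning-rate and neighborhood schedules, iteration count, and synthetic data are illustrative assumptions.

```python
# Minimal self-organizing map: 4 precipitation indices -> 4 map nodes (zones).
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 4))                        # 4 indices per location
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # 2x2 map -> 4 clusters
weights = rng.random((4, 4))                       # one codebook vector per node

for t in range(2000):
    lr = 0.5 * (1 - t / 2000)                      # decaying learning rate
    sigma = 1.0 * (1 - t / 2000) + 0.1             # decaying neighborhood radius
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # distances on the map grid
    h = np.exp(-d2 / (2 * sigma ** 2))                  # neighborhood kernel
    weights += lr * h[:, None] * (x - weights)          # pull nodes toward x

# Assign each location to its nearest node, i.e. its precipitation zone.
zones = ((data[:, None, :] - weights) ** 2).sum(axis=2).argmin(axis=1)
```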


2021 ◽  
Vol 11 (9) ◽  
pp. 3754
Author(s):  
René Reiss ◽  
Frank Hauser ◽  
Sven Ehlert ◽  
Michael Pütz ◽  
Ralf Zimmermann

While fast and reliable analytical results are crucial for first responders to make adequate decisions, these can be difficult to establish, especially at large-scale clandestine laboratories. To overcome this issue, multiple techniques at different levels of complexity are available, and their information value differs along with their complexity. In this publication, three techniques that can be applied for on-site analysis are compared. They range from ones with a simple yes-or-no response to sophisticated ones that provide complex information about a sample. The three evaluated techniques are immunoassay drug tests, representing systems that are easy to handle and fast to explain; ion mobility spectrometry, as state-of-the-art equipment that requires training and experience prior to use; and ambient pressure laser desorption, a possible future technique currently under development that requires a highly skilled operator. In addition to the measurement of validation parameters, real case samples were investigated to obtain practically relevant information about the capabilities and limitations of these techniques for on-site operations. The results demonstrate that, in general, all techniques deliver valid results, but the breadth of information varies widely between the techniques investigated.


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 253
Author(s):  
Yosang Jeong ◽  
Hoon Ryu

The non-equilibrium Green’s function (NEGF) approach is utilized in the field of nanoscience to predict the transport behavior of electronic devices. This work explores how much performance improvement can be achieved for quantum transport simulations with the aid of manycore computing, where the core numerical operation involves a recursive process of matrix multiplication. The major techniques adopted for performance enhancement are data restructuring, matrix tiling, thread scheduling, and offload computing, and we present technical details on how they are applied to optimize the performance of simulations on computing hardware including Intel Xeon Phi Knights Landing (KNL) systems and NVIDIA general-purpose graphics processing unit (GPU) devices. With a target structure of a silicon nanowire that consists of 100,000 atoms and is described with an atomistic tight-binding model, the effects of the optimization techniques on simulation performance are rigorously tested on a KNL node equipped with two Quadro GV100 GPU devices, and we observe that computation is accelerated by a factor of up to ∼20 compared to the unoptimized case. The feasibility of handling large-scale workloads in a huge computing environment is also examined with nanowire simulations over a wide energy range, where good scalability is achieved on up to 2048 KNL nodes.
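
To illustrate the recursive core, the following is a minimal sketch of a forward recursive Green's function sweep over block-tridiagonal Hamiltonian blocks, where each step is dominated by block-matrix multiplications and an inversion; block sizes and matrix values are illustrative, and the paper's tiling, scheduling, and offloading optimizations are not reproduced.

```python
# Sketch of the forward (left-connected) recursive Green's function sweep.
# Assumptions: toy block-tridiagonal Hamiltonian, small broadening eta.
import numpy as np

def rgf_forward(E, H_diag, H_off, eta=1e-6):
    """g_i = [ (E + i*eta) I - H_i - T_{i-1}^† g_{i-1} T_{i-1} ]^{-1}"""
    z = E + 1j * eta
    g, sigma = [], np.zeros_like(H_diag[0], dtype=complex)
    for Hi, Ti in zip(H_diag, [None] + H_off):
        if Ti is not None:
            # self-energy of everything to the left: repeated matmul step
            sigma = Ti.conj().T @ g[-1] @ Ti
        g.append(np.linalg.inv(z * np.eye(len(Hi)) - Hi - sigma))
    return g

n, b = 50, 20                                   # 50 slabs, 20 orbitals per slab
H_diag = [np.diag(np.random.rand(b)) for _ in range(n)]
H_off = [-np.eye(b) for _ in range(n - 1)]      # toy inter-slab coupling
g_blocks = rgf_forward(0.5, H_diag, H_off)
```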


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images. However, due to the locality of the convolution operation, they cannot deal well with long-range relationships. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance. However, medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
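
As a rough sketch of the CNN-plus-transformer pattern described here (not the authors' TransMed architecture), the following PyTorch example tokenizes CNN features from each modality and feeds the combined sequence to a transformer encoder; all layer sizes and the two-modality setup are assumptions.

```python
# Hypothetical hybrid: a CNN stem extracts local features per modality, and a
# transformer encoder models long-range dependencies across modality tokens.
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, n_classes=2, dim=128):
        super().__init__()
        self.stem = nn.Sequential(                # shared CNN feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),              # -> (dim, 4, 4) per modality
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, modalities):                # list of (B, 1, H, W) tensors
        tokens = []
        for m in modalities:                      # one token sequence per modality
            f = self.stem(m)                      # (B, dim, 4, 4)
            tokens.append(f.flatten(2).transpose(1, 2))   # (B, 16, dim)
        seq = torch.cat(tokens, dim=1)            # concatenate across modalities
        return self.head(self.encoder(seq).mean(dim=1))

model = HybridClassifier()
logits = model([torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)])
```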


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4638
Author(s):  
Simon Pratschner ◽  
Pavel Skopec ◽  
Jan Hrdlicka ◽  
Franz Winter

A revolution of the global energy industry is indispensable if the climate crisis is to be solved. However, renewable energy sources typically show significant seasonal and daily fluctuations. This paper provides a system concept model of a decentralized power-to-green-methanol plant consisting of a biomass heating plant with a thermal input of 20 MWth (oxyfuel or air mode), a CO2 processing unit (DeOxo reactor or MEA absorption), an alkaline electrolyzer, a methanol synthesis unit, an air separation unit, and a wind park. Applying oxyfuel combustion has the potential to directly utilize the O2 generated by the electrolyzer, which was analyzed by varying critical model parameters. A major objective was to determine whether applying oxyfuel combustion has a positive impact on the plant’s power-to-liquid (PtL) efficiency. For cases utilizing more than 70% of the CO2 generated by the combustion, the oxyfuel O2 demand is fully covered by the electrolyzer, making oxyfuel a viable option for large-scale applications. Conventional air combustion is recommended for small wind parks and scenarios using surplus electricity. Maximum PtL efficiencies of ηPtL,Oxy = 51.91% and ηPtL,Air = 54.21% can be realized. Additionally, a case study for one year of operation was conducted, yielding an annual output of about 17,000 t/a of methanol and 100 GWhth/a of thermal energy for an input of 50,500 t/a of woodchips and a wind park size of 36 MWp.
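
As a back-of-the-envelope check on the reported annual output, the short calculation below converts 17,000 t/a of methanol into chemical-energy terms using an assumed textbook lower heating value; the paper's own PtL efficiency definition and system boundaries are not reproduced here.

```python
# Convert the reported methanol output into chemical energy.
# Assumption: LHV of methanol ~19.9 GJ/t (~5.53 MWh/t); not the authors' model.
LHV_MEOH_MWH_PER_T = 5.53
methanol_t_per_year = 17_000                     # reported annual output
e_meoh_gwh = methanol_t_per_year * LHV_MEOH_MWH_PER_T / 1_000
print(f"methanol chemical energy: ~{e_meoh_gwh:.0f} GWh/a")   # ~94 GWh/a
```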


2018 ◽  
Vol 7 (12) ◽  
pp. 472 ◽  
Author(s):  
Bo Wan ◽  
Lin Yang ◽  
Shunping Zhou ◽  
Run Wang ◽  
Dezhi Wang ◽  
...  

The road-network matching method is an effective tool for map integration, fusion, and updating. Due to the complexity of road networks in the real world, matching methods often involve a series of complicated processes to identify homonymous roads and deal with their intricate relationships. However, traditional road-network matching algorithms, which are mainly central processing unit (CPU)-based approaches, may face performance bottlenecks when handling big data. We developed a particle-swarm optimization (PSO)-based parallel road-network matching method on the graphics processing unit (GPU). Based on the characteristics of the two main stages (similarity computation and matching-relationship identification), data-partition and task-partition strategies were utilized, respectively, to make full use of GPU threads. Experiments were conducted on datasets at 14 different scales. Results indicate that the parallel PSO-based matching algorithm (PSOM) could correctly identify most matching relationships with an average accuracy of 84.44%, which was at the same level as that of a benchmark, the probability-relaxation-matching (PRM) method. The PSOM approach significantly reduced the road-network matching time when dealing with large amounts of data in comparison with the PRM method. This paper provides a common parallel algorithm framework for road-network matching algorithms and contributes to the integration and updating of large-scale road networks.
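
A minimal sketch of the canonical PSO update loop that such a GPU version parallelizes is shown below; the fitness function is a placeholder, since the paper's road-similarity measure is not reproduced, and in a data-partition strategy each particle's fitness evaluation would map onto groups of GPU threads.

```python
# Canonical PSO: inertia + cognitive + social velocity update.
# The fitness function is a toy stand-in for a road-network similarity score.
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                               # placeholder objective
    return -np.sum((x - 0.5) ** 2, axis=1)    # higher is better

n, d = 30, 10                                 # particles, search dimensions
pos = rng.random((n, d))
vel = np.zeros((n, d))
pbest, pbest_f = pos.copy(), fitness(pos)

for _ in range(200):
    gbest = pbest[np.argmax(pbest_f)]         # swarm-wide best position
    r1, r2 = rng.random((2, n, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = fitness(pos)
    improved = f > pbest_f                    # each particle keeps its own best
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
```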


2011 ◽  
Vol 19 (4) ◽  
pp. 781-794 ◽  
Author(s):  
Jeong Euy Park ◽  
Chern-En Chiang ◽  
Muhammad Munawar ◽  
Gia Khai Pham ◽  
Apichard Sukonthasarn ◽  
...  

Background: Treatment of hypercholesterolaemia in Asia is rarely evaluated on a large scale, and data on treatment outcome are scarce. The Pan-Asian CEPHEUS study aimed to assess low-density lipoprotein cholesterol (LDL-C) goal attainment among patients on lipid-lowering therapy. Methods: This survey was conducted in eight Asian countries. Hypercholesterolaemic patients aged ≥18 years who had been on lipid-lowering treatment for ≥3 months (stable medication for ≥6 weeks) were recruited, and lipid concentrations were measured. Demographic and other clinically relevant information were collected, and the cardiovascular risk of each patient was determined. Definitions and criteria set by the updated 2004 National Cholesterol Education Program guidelines were applied. Results: In this survey, 501 physicians enrolled 8064 patients, of whom 7281 were included in the final analysis. The mean age was 61.0 years, 44.4% were female, and 85.1% were on statin monotherapy. LDL-C goal attainment was reported in 49.1% of patients overall, including 51.2% of primary and 48.7% of secondary prevention patients, and 36.6% of patients with familial hypercholesterolaemia. The LDL-C goal was attained in 75.4% of moderate risk, 55.4% of high risk, and only 34.9% of very high-risk patients. Goal attainment was directly related to age and inversely related to cardiovascular risk and baseline LDL-C. Conclusion: A large proportion of Asian hypercholesterolaemic patients on lipid-lowering drugs are not at recommended LDL-C levels and remain at risk for cardiovascular disease. Given the proven efficacy of lipid-lowering drugs in the reduction of LDL-C, there is room for further optimization of treatments to maximize benefits and improve outcomes.

