Detection of Pediatric Femur Configuration on X-ray Images

2021 · Vol. 11 (20) · pp. 9538
Author(s):  
Marta Drążkowska

In this paper, we present a fully automatic solution for denoting bone configuration on two-dimensional images. A dataset of 300 X-ray images of children’s knee joints was collected. The strict experimental protocol established in this study increased the difficulty of post-processing; we therefore tackled the problem of obtaining reliable information from medical image data of insufficient quality. We proposed a set of features that unambiguously denotes the configuration of the bone, namely the femur, on the image. It was crucial to define features independent of age, since the age variability of the subjects was high. Subsequently, we defined image keypoints directly corresponding to those features; their positions were used to determine the coordinate system denoting the femur configuration. A complex keypoint detector was proposed, composed of two different estimator architectures: one gradient-based and one based on a convolutional neural network. The positions of the keypoints were used to determine the configuration of the femur in each image frame. The overall performance of both estimators working in parallel was evaluated using X-ray images from the publicly available LERA dataset.
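As an illustration only, and not the paper's implementation, the sketch below shows one way keypoints from two parallel estimators could be fused and how two fused keypoints along the bone axis could define a femur coordinate frame; the fusion weight and the keypoint layout are assumptions.

```python
# Illustrative sketch (not the authors' method): fuse keypoints from two
# parallel estimators and derive a bone coordinate frame from two of them.
import numpy as np

def fuse_estimates(kp_gradient: np.ndarray, kp_cnn: np.ndarray,
                   w_cnn: float = 0.5) -> np.ndarray:
    """Blend keypoints from the two estimators (shape: [n_points, 2])."""
    return (1.0 - w_cnn) * kp_gradient + w_cnn * kp_cnn

def femur_frame(keypoints: np.ndarray):
    """Return an origin and rotation angle describing bone configuration.

    Assumes keypoints[0] and keypoints[1] lie along the bone axis.
    """
    origin = keypoints[0]
    axis = keypoints[1] - keypoints[0]
    angle = np.arctan2(axis[1], axis[0])      # orientation in image coordinates
    return origin, np.degrees(angle)

# Example usage with dummy pixel coordinates
grad_kp = np.array([[120.0, 340.0], [128.0, 190.0]])
cnn_kp  = np.array([[118.0, 338.0], [130.0, 192.0]])
origin, angle_deg = femur_frame(fuse_estimates(grad_kp, cnn_kp))
print(origin, angle_deg)
```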

Secure transmission of medical image data is one of the major challenges in the health sector. The National Health Information Network has to protect the data in a confidential manner. Storage is also a basic concern, along with secure transmission. In this paper, we propose an algorithm that ensures confidentiality, authentication, and integrity of the scrambled data before it is transmitted over the communication medium. Before transmission, the data is compressed while remaining encrypted. The research work is demonstrated with simulation results, which show that the proposed approach effectively maintains confidentiality, authentication, and integrity. The experimental results evaluate medical image quality using metrics such as PSNR, MSE, SC, and NAE.
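The quality metrics named above are standard; the minimal sketch below shows one common way to compute MSE, PSNR, structural content (SC), and normalized absolute error (NAE) between an original image and the image recovered after decryption and decompression. It is not the paper's code.

```python
# Standard image-quality metrics between an original image and the image
# recovered after the encryption/compression pipeline (illustrative only).
import numpy as np

def mse(original: np.ndarray, recovered: np.ndarray) -> float:
    o, r = original.astype(np.float64), recovered.astype(np.float64)
    return float(np.mean((o - r) ** 2))

def psnr(original: np.ndarray, recovered: np.ndarray, peak: float = 255.0) -> float:
    err = mse(original, recovered)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def structural_content(original: np.ndarray, recovered: np.ndarray) -> float:
    o, r = original.astype(np.float64), recovered.astype(np.float64)
    return float(np.sum(o ** 2) / np.sum(r ** 2))

def nae(original: np.ndarray, recovered: np.ndarray) -> float:
    o, r = original.astype(np.float64), recovered.astype(np.float64)
    return float(np.sum(np.abs(o - r)) / np.sum(np.abs(o)))
```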


Author(s):  
Karthik K ◽  
Sowmya S Kamath

The detailed physiological perspectives captured by medical imaging provide actionable insights that help doctors manage comprehensive patient care. However, the quality of such diagnostic image modalities is often affected by mismanagement of the image-capturing process by poorly trained technicians and by older or poorly maintained imaging equipment. Further, a patient is often subjected to scanning at different orientations to capture the frontal, lateral, and sagittal views of the affected areas. Due to the large volume of diagnostic scans performed at a modern hospital, adequate documentation of such additional perspectives is often overlooked, even though it is an essential element of quality diagnostic and predictive analytics systems. Another crucial challenge affecting effective medical image data management is that diagnostic scans are essentially stored as unstructured data, lacking a well-defined processing methodology for enabling intelligent image data management to support applications such as similar-patient retrieval and automated disease prediction. One solution is to incorporate automated diagnostic image descriptions of the observations/findings by leveraging computer vision and natural language processing. In this work, we present multi-task neural models capable of addressing these critical challenges. We propose an ESRGAN-based image enhancement technique for improving the quality and visualization of medical chest X-ray images, thereby substantially improving the potential for accurate diagnosis, automatic detection, and region-of-interest segmentation. We also propose a CNN-based model called ViewNet for predicting the view orientation of the X-ray image, and generate medical reports using an Xception net, thus facilitating a robust medical image management system for intelligent diagnosis applications. Experimental results are reported using standard metrics such as BRISQUE, PIQE, and BLEU scores, indicating that the proposed models achieve excellent performance. Further, the proposed deep learning approaches enable diagnosis in less time, and their hybrid architecture shows significant potential for supporting many intelligent diagnosis applications.
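The architecture of ViewNet is not detailed in the abstract, so the following is only a hypothetical sketch of a small CNN classifier for the three view orientations mentioned above (frontal, lateral, sagittal); the layer sizes and the choice of PyTorch are assumptions.

```python
# Hypothetical view-orientation classifier in the spirit of the CNN-based
# ViewNet described above; layer sizes are illustrative, not the authors'.
import torch
import torch.nn as nn

class ViewClassifier(nn.Module):
    def __init__(self, n_views: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_views)

    def forward(self, x):                 # x: (batch, 1, H, W) grayscale X-ray
        return self.classifier(self.features(x).flatten(1))

# Example: classify a dummy 256x256 X-ray into frontal / lateral / sagittal
logits = ViewClassifier()(torch.randn(1, 1, 256, 256))
predicted_view = logits.argmax(dim=1)     # index of the predicted orientation
```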


Author(s):  
Amalia Charisi ◽  
Panagiotis Korvesis ◽  
Vasileios Megalooikonomou

In this paper, the authors propose a method for medical image retrieval in distributed systems to facilitate telemedicine. The proposed framework can be used by a network of healthcare centers, some of which may be remotely located, assisting in diagnosis without requiring the transfer of patients. Security and confidentiality issues of medical data are anticipated and handled at the local site, following the procedures and protocols of each institution. To make the search more effective, the authors introduce a distributed index based on features extracted from each image. Considering network bandwidth limitations and other restrictions associated with handling medical data, the images are processed locally and only a pointer is distributed in the network. For the distribution of this pointer, the authors propose a function that maps the pointer of each image to a node with similar contents.
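The exact mapping function is not given in the abstract; the sketch below only illustrates the general idea under assumptions: features are extracted locally, a lightweight pointer is built, and a coarse feature signature is hashed so that images with similar content tend to be assigned to the same node.

```python
# Illustrative sketch (not the authors' function): distribute image pointers
# by hashing a coarse signature of locally extracted features to a node id.
import hashlib
import numpy as np

def pointer_for(image_id: str, features: np.ndarray) -> dict:
    """Build a lightweight pointer; the image itself stays at the local site."""
    return {"image_id": image_id, "features": features.tolist()}

def node_for(features: np.ndarray, n_nodes: int) -> int:
    """Map similar feature vectors to the same node via a coarse signature."""
    signature = (features > features.mean()).astype(np.uint8).tobytes()
    digest = hashlib.sha1(signature).hexdigest()
    return int(digest, 16) % n_nodes

features = np.array([0.12, 0.80, 0.33, 0.64])   # e.g. texture/intensity features
ptr = pointer_for("knee_0041", features)        # hypothetical image identifier
target_node = node_for(features, n_nodes=8)     # node that stores the pointer
```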

