The Open Biomedical Engineering Journal
Latest Publications


TOTAL DOCUMENTS

298
(FIVE YEARS 52)

H-INDEX

21
(FIVE YEARS 2)

Published By Bentham Science

ISSN: 1874-1207

2021, Vol. 15 (1), pp. 190-203
Author(s):  
Gargee Vaidya ◽  
Shreya Chandrasekhar ◽  
Ruchi Gajjar ◽  
Nagendra Gajjar ◽  
Deven Patel ◽  
...  

Background: The process of In Vitro Fertilization (IVF) involves collecting multiple samples of mature eggs that are fertilized with sperm in the IVF laboratory. The resulting embryos are graded, and the most viable one is selected for transfer into the mother’s womb for a healthy pregnancy. Currently, grading and selection of the healthiest embryo are performed by visual morphological assessment, and manual records are maintained by embryologists. Objectives: Maintaining manual records makes the process tedious, time-consuming, and error-prone. The absence of a universal grading system leads to high subjectivity and a low pregnancy success rate. To improve the chances of pregnancy, multiple embryos are transferred into the womb, elevating the risk of multiple pregnancies. In this paper, we propose a deep learning-based method to perform automatic grading of embryos using time-series prediction with a Long Short-Term Memory (LSTM) network and a Convolutional Neural Network (CNN). Methods: The CNN extracts features from the embryo images, and the sequence of these features is fed to the LSTM for time-series prediction, which yields the final grade. Results: Our model achieved 100% accuracy on both training and validation. The results are compared with those obtained from a GRU model as well as other pre-trained models. Conclusion: The automated process is robust and eliminates subjectivity. Days of manual work can be replaced by our model, which produces the grading within 8 seconds on a GPU.
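
As a rough illustration of the pipeline described in the Methods, the sketch below wraps a small per-frame CNN in a TimeDistributed layer and feeds the resulting feature sequence to an LSTM for grading. The frame count, image size, and number of grade classes are placeholder assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch of a CNN + LSTM grading pipeline (illustrative, not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 16        # assumed length of the time-lapse embryo sequence
IMG_SIZE = 128         # assumed image size
NUM_GRADES = 5         # assumed number of embryo grades

# Per-frame CNN feature extractor
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(IMG_SIZE, IMG_SIZE, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])

# Apply the CNN to every frame, then let an LSTM model the temporal sequence of features
model = models.Sequential([
    layers.TimeDistributed(cnn, input_shape=(NUM_FRAMES, IMG_SIZE, IMG_SIZE, 1)),
    layers.LSTM(64),
    layers.Dense(NUM_GRADES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```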


2021, Vol. 15 (1), pp. 235-248
Author(s):  
Mayank R. Kapadia ◽  
Chirag N. Paunwala

Introduction: A Content-Based Image Retrieval (CBIR) system retrieves images from various media types; one of its applications is Content-Based Medical Image Retrieval (CBMIR). A CBMIR system retrieves the most similar images from historical cases and thereby supports the physician's decision in diagnosing a disease. Extracting useful features from the query image to link similar types of images is the major challenge in the CBIR domain. A Convolutional Neural Network (CNN) can overcome the drawbacks of traditional algorithms, which depend on low-level feature extraction techniques. Objective: The objective of this study is to develop a CNN model with a minimum number of convolution layers while achieving the maximum possible accuracy for the CBMIR system. Fewer convolution layers reduce the number of mathematical operations and the training time, as well as the number of trainable parameters (weights and biases) and hence the memory required to store the model. This work therefore focuses on developing an optimized CNN model for the CBMIR system, which can support the physician's decision to diagnose a disease from the images and retrieve relevant cases to help decide the precise treatment. Methods: A deep learning-based model is proposed. Experiments were performed with different numbers of convolution layers and various optimizers to obtain the maximum accuracy with the minimum number of layers. A ten-layer CNN model was developed from scratch and used to extract the features of the training and testing images and to classify the test image. Once the image class is identified, the most relevant images are retrieved from that class based on the Euclidean distance between the query features and the database features of the identified class. The general dataset CIFAR10, which has 60,000 images in 10 classes, and the medical dataset IRMA, which has 2508 images in 9 classes, were used to analyze the proposed method. The proposed model was also applied to a medical X-ray image dataset of chest disease and compared with other pre-trained models. Results: Accuracy and average precision rate are the metrics used to compare the proposed model with different machine learning techniques. The accuracy of the proposed model on the CIFAR10 dataset is 93.9%, which is better than the state-of-the-art methods. After this success on the general dataset, the model was also tested on the medical datasets. For the X-ray images of the IRMA dataset, the accuracy is 86.53%, which is better than the results of different pre-trained models. The model was also tested on another X-ray dataset used to identify chest-related disease, for which the average precision rate is 97.25%. The proposed model also addresses the major challenge of the semantic gap: the semantic gap is 2.75% for the chest disease dataset and 13.47% for the IRMA dataset. Moreover, only ten convolution layers are used in the proposed model, far fewer than in other pre-trained models. Conclusion: The proposed technique shows remarkable improvement in performance metrics over CNN-based state-of-the-art methods. It also offers a significant improvement in performance metrics over different pre-trained models on the two medical X-ray image datasets.
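
The classify-then-retrieve step can be sketched as follows. The function name and array layout are illustrative assumptions; the feature vectors are assumed to come from the trained CNN, and retrieval is restricted to the predicted class and ranked by Euclidean distance, as the abstract describes.

```python
# Hedged sketch of class-restricted retrieval by Euclidean distance (illustrative names).
import numpy as np

def retrieve_similar(query_feat, query_class, db_feats, db_classes, top_k=5):
    """Return indices and distances of the top_k database images closest to the
    query, restricted to images of the predicted class."""
    idx = np.where(db_classes == query_class)[0]          # candidates from the identified class
    dists = np.linalg.norm(db_feats[idx] - query_feat, axis=1)  # Euclidean distance to the query
    order = np.argsort(dists)[:top_k]
    return idx[order], dists[order]
```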


2021, Vol. 15 (1), pp. 170-179
Author(s):  
Kathiravan Srinivasan ◽  
Ramaneswaran Selvakumar ◽  
Sivakumar Rajagopal ◽  
Dimiter Georgiev Velev ◽  
Branislav Vuksanovic

Recently, significant research has been done on Super-Resolution (SR) methods for augmenting the spatial resolution of Magnetic Resonance (MR) images, which aids the physician in improved disease diagnosis. Single-image SR methods have drawbacks: they fail to capture self-similarity in non-local patches and are not robust to noise. To exploit the non-local self-similarity and intrinsic sparsity in MR images, this paper proposes the use of Cluster-Sparse Assisted Super-Resolution. This SR method effectively captures similarity in non-locally positioned patches by training on clusters of patches using a self-adaptive dictionary, which also leads to better edge and texture detection. Experiments show that using Cluster-Sparse Assisted Super-Resolution on brain MR images results in enhanced detection of lesions, leading to better diagnosis.
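
The patch-grouping idea behind the method can be illustrated roughly as below: overlapping patches are clustered so that similar but non-locally positioned patches end up in the same group (and, in the full method, share one learned dictionary). This sketch only performs the grouping step with k-means; the self-adaptive dictionary learning and reconstruction stages are omitted, and all names and parameters are illustrative assumptions.

```python
# Illustrative grouping of non-local, self-similar patches (clustering step only).
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.cluster import KMeans

def cluster_patches(lr_image, patch_size=8, n_clusters=64):
    patches = extract_patches_2d(lr_image, (patch_size, patch_size))   # all overlapping patches
    flat = patches.reshape(len(patches), -1).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)
    # Each cluster gathers self-similar patches from anywhere in the image
    return [flat[labels == k] for k in range(n_clusters)]
```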


2021, Vol. 15 (1), p. 131
Author(s):  
Rahul K. Kher ◽  
Chirag Paunwala ◽  
Falgun Thakkar ◽  
Heena Kher ◽  
Mita Paunwala

2021, Vol. 15 (1), pp. 119-130
Author(s):  
Abdelghani Moussaid ◽  
Hassan Bouaouine ◽  
Nabil Ngote

Objective: The present investigation focuses on a self-assessment of the biomedical activity related to embedded medical devices on board a fleet of 46 EMS medicalized ambulances, according to the High Authority of Health standard (criterion 8K) and the Guide of Good Practices of Biomedical Engineering. Materials and Methods: The methodology adopted is based on an analysis allowing the evaluation and observation of practices related to biomedical activity in these ambulances. An initial assessment, carried out in March 2021, measured the gaps between the actual situation and the recommendations of the two self-diagnosis tools (the High Authority of Health and the Guide of Good Practices of Biomedical Engineering standards). A series of corrective actions were proposed and then implemented, and a second self-assessment took place 6 months later, in October 2021. Results: Between March and October 2021, an improvement in the scores for almost all axes of the two self-assessment tools was noted. The self-assessment score for the High Authority of Health reference system rose from 44% in March 2021 to 63% in October 2021, an increase of 19 percentage points, and that of the Guide of Good Practices of Biomedical Engineering increased from 67.54% to 80.96%, an increase of 13.42 percentage points. Conclusion: The implementation of a maintenance strategy integrating the notion of quality, relevant procedures, and pertinent work tools has made it possible to significantly improve the biomedical activity within the medical ambulances and to optimise the embedded medical devices.


2021, Vol. 15 (1), pp. 105-114
Author(s):  
Vahid R. Nafisi ◽  
Roshanak Ghods

Background: In Persian Medicine (PM), measuring the wrist temperature/humidity and pulse is one of the main methods for determining a person's health status and temperament. An important problem is the dependence of the diagnosis on the physician's interpretation of the above-mentioned criteria, which is perhaps one reason why this method has yet to be combined with modern medical methods. There is also sometimes a need to use PM to diagnose patients remotely, especially during a pandemic, which raises the question of how to implement PM in a telecare system. This study addresses these concerns and outlines a system for pulse signal measurement and temperament detection based on PM. Methods: A system based on PM was designed and clinically implemented; it uses data from recorded thermal distributions, a temperament questionnaire, and a customized device that logs the pulse waves on the wrist. This system was used for patient care via telecare. Results: The temperaments of 34 participants were assessed by a PM specialist using the standardized Mojahedi Mizaj Questionnaire (MMQ). Thermal images of the wrist in the supine position (named Malmas in PM), the back of the hand, and the entire face were recorded under the supervision of the physician, and the wrist pulse waves were evaluated by a customized pulse measurement device. Finally, the collected data could be sent to a physician via a telecare system for further interpretation and prescription of medications. Conclusion: This preliminary study focused on the implementation of a combined hardware-software system for patient assessment based on PM. It appears that the design and construction of a customized device that measures the pulse waves, and some other criteria, according to PM is feasible and can decrease the dependence of the diagnosis on PM specialists. Thus, it can be incorporated into a telemedicine system.


2021, Vol. 15 (1), pp. 204-212
Author(s):  
Nishant Jain ◽  
Arvind Yadav ◽  
Yogesh Kumar Sariya ◽  
Arun Balodi

Background: Medical image fusion methods are applied in a wide assortment of medical fields, for example, computer-assisted diagnosis, telemedicine, radiation treatment, and preoperative planning. Computed Tomography (CT) is used to scan bone structure, while Magnetic Resonance Imaging (MRI) is used to examine the soft tissues of the cerebrum. Fusing the images obtained from the two modalities helps radiologists diagnose abnormalities in the brain and localize the position of the abnormality relative to the bone. Methods: A multimodal medical image fusion procedure reduces information uncertainty and improves the accuracy of clinical diagnosis. The aim is to preserve salient features from multiple source images to produce an enhanced fused image. Fusing CT and MRI images makes it possible to analyze the two modalities directly. Several state-of-the-art techniques are available for the fusion of CT and MRI images, and the discrete wavelet transform (DWT) is one of the most widely used. However, the efficacy of the different wavelet filters used to decompose the images, which may improve fusion quality, has not been studied in detail. Therefore, the objective of this study is to assess the utility of wavelet families for the fusion of CT and MRI images. In this paper, the effect of 8 wavelet families (120 family members) on the visual quality of the fused CT and MRI image is investigated. Further, to assess the quality of the fused image, two quantitative performance evaluation parameters, namely classical and gradient information measures, have been calculated. Results: Experimental results demonstrate that, among the 120 wavelet family members (8 wavelet families), the db1, rbio1.1, and Haar wavelets outperform the other family members in both qualitative and quantitative analysis. Conclusion: Quantitative and qualitative analysis shows that the fused image may help radiologists diagnose abnormalities in the brain and localize the position of the abnormality relative to the bone more easily. For further improvement of the fused results, methods based on deep learning may be tested in the future.
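
A minimal sketch of single-level DWT-based CT/MRI fusion is given below, using PyWavelets. The fusion rules (averaging the approximation coefficients, taking the maximum-magnitude detail coefficients) and the db1 wavelet are common illustrative choices rather than the exact rules evaluated in the paper, and the two images are assumed to be registered and of equal size.

```python
# Illustrative single-level DWT fusion of a registered CT/MRI pair.
import numpy as np
import pywt

def dwt_fuse(ct, mri, wavelet="db1"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(ct.astype(np.float32), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(mri.astype(np.float32), wavelet)
    fuse_detail = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # keep strongest edges/texture
    fused_coeffs = (
        (cA1 + cA2) / 2.0,                       # average the approximation bands
        (fuse_detail(cH1, cH2),
         fuse_detail(cV1, cV2),
         fuse_detail(cD1, cD2)),
    )
    return pywt.idwt2(fused_coeffs, wavelet)     # reconstruct the fused image
```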


2021, Vol. 15 (1), pp. 180-189
Author(s):  
Shital D. Bhatt ◽  
Himanshu B. Soni

Background: Lung cancer is among the major causes of death in the world, and its early detection is a major challenge. This has encouraged the development of Computer-Aided Detection (CAD) systems. Objectives: We designed a CAD system to improve performance in detecting and classifying pulmonary nodules. Though the system will not replace radiologists, it will help them diagnose lung cancer accurately. Methods: The architecture comprises two steps. In the first step, CT scans are pre-processed and candidates are extracted using the positive and negative annotations provided with the LUNA16 dataset. The second step consists of three different neural networks for classifying the pulmonary nodules obtained from the first step: a 2D Convolutional Neural Network (2D-CNN), Visual Geometry Group-16 (VGG-16), and a simplified VGG-16, each of which classifies pulmonary nodules independently. Results: The classification accuracies achieved for the 2D-CNN, VGG-16, and simplified VGG-16 were 99.12%, 98.17%, and 99.60%, respectively. Conclusion: The integration of deep learning techniques with machine learning and image processing can serve as a good means of extracting pulmonary nodules and classifying them with improved accuracy. Based on these results, it can be concluded that transfer learning improves system performance. In addition, performance improves with proper design of the CAD system, taking into account the size of the dataset and the available computing power.
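
A hedged sketch of the transfer-learning idea mentioned in the conclusion is shown below: a pre-trained VGG-16 backbone with a small binary head for nodule versus non-nodule patches. The patch size, the frozen backbone, and the head layout are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative transfer-learning classifier for nodule candidates (assumed 64x64 RGB patches).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(64, 64, 3))
base.trainable = False  # freeze the pre-trained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # nodule vs. non-nodule
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```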


2021, Vol. 15 (1), pp. 226-235
Author(s):  
Ojas A. Ramwala ◽  
Poojan Dalal ◽  
Parima Parikh ◽  
Upena Dalal ◽  
Mita C. Paunwala ◽  
...  

Background: The upsurge of COVID-19 has received significant international attention because of its life-threatening ramifications. To ensure that susceptible patients can be quarantined during the incubation period of the coronavirus to control the spread of the disease, it is imperative to mass screen patients automatically and non-invasively. Diagnosis using RT-PCR is arduous and time-consuming. Currently, non-invasive mass screening of susceptible cases is performed by thermal screening; however, fever symptoms can be suppressed by the consumption of paracetamol. Methods: A novel multi-modal approach is proposed: throat inflammation-based mass screening and early prediction, followed by chest X-ray-based diagnosis. Depth-wise separable convolutions are exploited by fine-tuning the Xception and MobileNet architectures, and the Nadam optimizer is used to promote faster convergence. Results: The proposed method achieved 91% accuracy on the throat inflammation identification task and 96% accuracy on chest radiography on the dataset. Conclusion: Evaluation of the proposed method indicates promising results and supports its clinical reliability. A future direction could be working on a larger dataset in close collaboration with the medical fraternity.
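
The fine-tuning setup can be sketched as follows: a MobileNet backbone (built from depth-wise separable convolutions) trained end-to-end with the Nadam optimizer. The input size, learning rate, and binary output head are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative fine-tuning of a depth-wise-separable backbone with the Nadam optimizer.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = True  # fine-tune the separable-convolution backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # assumed binary decision (e.g. suspect vs. non-suspect)
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```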


2021, Vol. 15 (1), pp. 132-140
Author(s):  
Hiren Mewada ◽  
Jawad F. Al-Asad ◽  
Amit Patel ◽  
Jitendra Chaudhari ◽  
Keyur Mahant ◽  
...  

Background: Advances in convolutional neural networks (CNNs) have reduced the burden on experts through computer-aided diagnosis of human breast cancer. However, most CNN networks use spatial features only, whereas the inherent texture structure present in histopathological images plays an important role in distinguishing malignant tissues. This paper proposes an alternative CNN network that integrates Local Binary Pattern (LBP)-based texture information with CNN features. Methods: The study argues that LBP provides the most robust rotation- and translation-invariant features in comparison with other texture feature extractors. Therefore, a formulation of LBP in the context of the convolution operation is presented and used in the proposed CNN network. A non-trainable, fixed set of binary convolutional filters representing LBP features is combined with trainable convolution filters to approximate the response of the convolution layer. A CNN architecture guided by LBP features is used to classify the histopathological images. Result: The network is trained using the BreakHis dataset. The use of a fixed set of LBP filters reduces the burden on the CNN by cutting the number of trainable parameters by a factor of 9, making it suitable for environments with fewer resources. The proposed network obtained a maximum accuracy of 96.46%, with 98.51% AUC and a 97% F1-score. Conclusion: LBP-based texture information plays a vital role in cancer image classification. A multi-channel LBP feature fusion is used in the CNN network. The experimental results show that the new LBP-guided CNN structure requires fewer training parameters while preserving the classification accuracy of the CNN network.
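
One way to realise the fixed-plus-trainable filter idea is sketched below: eight non-trainable 3x3 kernels, each computing a neighbour-minus-centre difference (the comparisons underlying LBP), are concatenated with ordinary trainable convolution filters. The filter design, input size, and classification head are illustrative assumptions and may differ from the paper's exact formulation.

```python
# Illustrative combination of fixed LBP-style difference filters with trainable filters.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def lbp_difference_kernels():
    """Eight fixed 3x3 kernels, one neighbour-minus-centre difference each."""
    neighbours = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    kernels = []
    for (r, c) in neighbours:
        k = np.zeros((3, 3), dtype=np.float32)
        k[1, 1] = -1.0   # centre pixel
        k[r, c] = 1.0    # one of its eight neighbours
        kernels.append(k)
    return np.stack(kernels, axis=-1)[:, :, np.newaxis, :]   # shape (3, 3, 1, 8)

inputs = layers.Input(shape=(128, 128, 1))        # assumed grey-scale patch size

# Non-trainable branch: fixed LBP-style difference filters
lbp_conv = layers.Conv2D(8, 3, padding="same", use_bias=False, trainable=False)
lbp_maps = lbp_conv(inputs)
lbp_conv.set_weights([lbp_difference_kernels()])
lbp_maps = layers.ReLU()(lbp_maps)                # keep positive comparisons, as in LBP thresholding

# Trainable branch: ordinary convolution filters
learned = layers.Conv2D(24, 3, padding="same", activation="relu")(inputs)

# Concatenate fixed texture responses with learned features and classify
features = layers.Concatenate()([lbp_maps, learned])
x = layers.MaxPooling2D()(features)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # assumed benign vs. malignant task
model = tf.keras.Model(inputs, outputs)
```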

