Fast registration of segmented images by normal sampling

Author(s):  
Jan Kybic ◽  
Martin Dolejsi ◽  
Jiri Borovec
2021 ◽  
pp. 096228022098354
Author(s):  
N Satyanarayana Murthy ◽  
B Arunadevi

Diabetic retinopathy (DR) is an eye condition that develops progressively in people with diabetes. Complications of diabetes damage the blood vessels at the back of the retina, and in severe cases DR can lead to impaired vision or blindness. These serious effects can be averted through timely treatment and early detection. The disease has been spreading rapidly, particularly among the working-age population, which makes diagnosis at the earliest possible stage essential. Detection of the retinal blood vessels (RBVs) therefore plays a central role in tracking the progression of the disorder: the growth of abnormal vessels marks the developing stages of DR and can be recognized by extracting the RBVs. The main aim of this research is to develop an automatic approach for detecting the blood vessels affected by DR. The proposed method has two major steps: segmentation and classification of the affected retinal blood vessels. Segmentation uses Fuzzy C-means clustering with centroids initialized by Kinetic Gas Molecule Optimization. In the classification step, the segmented images are fed to a hybrid model, a convolutional neural network combined with a bidirectional long short-term memory network (CNN with Bi-LSTM); the learning of the Bi-LSTM is refined with a self-attention mechanism to improve classification accuracy. Experimental results show that the hybrid algorithm achieves higher accuracy, specificity, and sensitivity than existing techniques.
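
As a minimal illustration of the segmentation step, the sketch below runs plain fuzzy c-means clustering on the green channel of a fundus image. The Kinetic Gas Molecule Optimization that the authors use for centroid initialization is not reproduced here; random initialization stands in for it, and the input path and cluster count are assumptions.

import numpy as np

def fuzzy_cmeans(pixels, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """pixels: (N, 1) intensity column; returns memberships U (C, N) and centroids (C, 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, len(pixels)))
    u /= u.sum(axis=0, keepdims=True)                # memberships sum to 1 per pixel
    for _ in range(max_iter):
        um = u ** m
        centroids = (um @ pixels) / um.sum(axis=1, keepdims=True)
        dist = np.abs(pixels.T - centroids) + 1e-12  # (C, N) intensity distances
        u_new = dist ** (-2.0 / (m - 1.0))           # standard FCM membership update
        u_new /= u_new.sum(axis=0, keepdims=True)
        if np.linalg.norm(u_new - u) < tol:
            return u_new, centroids
        u = u_new
    return u, centroids

# Hypothetical usage: treat the darkest cluster of the green channel as vessel tissue.
# green = cv2.imread("fundus.png")[:, :, 1].astype(float)
# u, c = fuzzy_cmeans(green.reshape(-1, 1))
# labels = u.argmax(axis=0).reshape(green.shape)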


2021 ◽  
Vol 11 (11) ◽  
pp. 4999
Author(s):  
Chung-Yoh Kim ◽  
Jin-Seo Park ◽  
Beom-Sun Chung

When performing deep brain stimulation (DBS) of the subthalamic nucleus, practitioners must interpret magnetic resonance images (MRI) correctly so that they can place the DBS electrode accurately at the target without damaging other structures. The aim of this study is to provide a real color volume model of a cadaver head that helps medical students and practitioners better understand the sectional anatomy of DBS surgery. Sectioned images of a cadaver head were reconstructed into a real color volume model with a voxel size of 0.5 mm × 0.5 mm × 0.5 mm. Based on preoperative MRI and postoperative computed tomography (CT) scans of 31 patients, a virtual DBS electrode was rendered on the cadaver volume model. The volume model was sectioned at the classical and oblique planes to produce real color images. In addition, segmented images of the cadaver head were formed into volume models. On the classical and oblique planes, the anatomical structures around the course of the DBS electrode were identified. The entry point, waypoint, target point, and nearby structures where the DBS electrode could be misplaced were also elucidated. The oblique planes could be understood concretely by comparing the volume model of the sectioned images with that of the segmented images. The real color and high resolution of the volume model enabled observation of minute structures even on the oblique planes. Users can download the volume models and correlate them with other patients' data to grasp the anatomical orientation.
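
A rough sketch of the reconstruction idea, under stated assumptions: serial sectioned RGB images are stacked into a voxel volume and an arbitrary oblique plane is resampled from it, which is conceptually how oblique sections along an electrode trajectory can be produced from a real-color model. File names, plane vectors, and voxel-spacing handling are illustrative and not the authors' pipeline.

import numpy as np
from scipy.ndimage import map_coordinates

def load_volume(paths):
    """Stack equally spaced, same-sized RGB sections into a (Z, Y, X, 3) array."""
    import imageio.v3 as iio
    return np.stack([iio.imread(p) for p in paths], axis=0)

def oblique_slice(volume, origin, u, v, shape=(512, 512)):
    """Resample the plane origin + i*u + j*v (voxel coordinates) with trilinear interpolation."""
    i, j = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = (origin[:, None, None]
              + u[:, None, None] * i
              + v[:, None, None] * j)                      # (3, H, W) sampling grid
    channels = [map_coordinates(volume[..., c].astype(float), coords, order=1)
                for c in range(volume.shape[-1])]
    return np.clip(np.stack(channels, axis=-1), 0, 255).astype(np.uint8)

# Hypothetical usage: a plane tilted to follow an electrode trajectory.
# import glob
# vol = load_volume(sorted(glob.glob("sections/*.png")))
# u = np.array([0.2, 1.0, 0.0]); u /= np.linalg.norm(u)    # in-plane direction along the track
# plane = oblique_slice(vol, origin=np.array([80.0, 0.0, 0.0]), u=u, v=np.array([0.0, 0.0, 1.0]))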


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Rajesh Kumar ◽  
Rajeev Srivastava ◽  
Subodh Srivastava

A framework for automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages of the proposed methodology are enhancement of microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is employed at each design step of the proposed framework after a comparative analysis of the commonly used methods in each category. To highlight the details of the tissue and its structures, the contrast-limited adaptive histogram equalization approach is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape- and morphology-based features are extracted from the segmented images. These include gray level texture features, color-based features, color gray level texture features, Law's Texture Energy-based features, Tamura's features, and wavelet features. Finally, the K-nearest neighbor method is used to classify images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated using well-known parameters on 1000 randomly selected microscopic biopsy images covering the four fundamental tissues (connective, epithelial, muscular, and nervous).
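
The fragment below sketches three of the described stages on a single image: CLAHE enhancement, k-means segmentation, and a handful of GLCM texture features feeding a K-nearest-neighbor classifier. It is a simplified stand-in, assuming OpenCV, scikit-image, and scikit-learn, and it covers only a small subset of the feature families listed in the paper.

import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def enhance(gray):
    """Contrast-limited adaptive histogram equalization on an 8-bit grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def segment(gray, k=3):
    """k-means on pixel intensities; returns a label image."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        gray.reshape(-1, 1).astype(float))
    return labels.reshape(gray.shape)

def texture_features(gray):
    """Four GLCM properties as a tiny illustrative feature vector."""
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

# Hypothetical training on precomputed feature matrices with labels in {"normal", "cancerous"}:
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
# y_pred = knn.predict(X_test)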


Author(s):  
Udit Jindal ◽  
Sheifali Gupta

Agriculture contributes substantially to every nation's economy, but crop diseases have become a serious issue that must be resolved promptly, which makes crop/plant disease detection a significant area of work. Many studies have addressed automatic disease detection using machine learning, but comparatively little work has applied deep learning with efficient results. This article presents a convolutional neural network for plant disease detection using the open-access 'PlantVillage' dataset in three versions: colored, grayscale, and segmented images. The dataset consists of 54,305 images and is used to train a model able to detect diseases present in edible plants. The proposed neural network achieved testing accuracies of 99.27%, 98.04%, and 99.14% for colored, grayscale, and segmented images, respectively. The work also reports better precision and recall rates on the colored image dataset.
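
For orientation only, here is a small PyTorch CNN of the general kind described, set up for 128 × 128 PlantVillage images. The published architecture and hyperparameters are not reproduced; the 38-class output matches the commonly reported PlantVillage label set and is an assumption.

import torch
import torch.nn as nn

class PlantCNN(nn.Module):
    def __init__(self, num_classes=38):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                 # x: (N, 3, 128, 128)
        return self.classifier(self.features(x))

# model = PlantCNN()
# logits = model(torch.randn(4, 3, 128, 128))   # -> shape (4, 38)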


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Caio B. Wetterich ◽  
Ratnesh Kumar ◽  
Sindhuja Sankaran ◽  
José Belasque Junior ◽  
Reza Ehsani ◽  
...  

The overall objective of this work was to develop and evaluate computer vision and machine learning techniques for classifying Huanglongbing-(HLB)-infected and healthy leaves using fluorescence imaging spectroscopy. The fluorescence images were segmented using normalized graph cut, and texture features were extracted from the segmented images using a co-occurrence matrix. The extracted features were used as input to a support vector machine (SVM) classifier. The classification results were evaluated based on classification accuracy and the numbers of false positives and false negatives. The results indicated that the SVM could classify HLB-infected leaf fluorescence intensities with up to 90% classification accuracy. Although the fluorescence intensities of leaves collected in Brazil and the USA differed, the method shows potential for detecting HLB.
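
A compact sketch of the feature and classification stages, assuming the normalized-graph-cut segmentation has already produced a grayscale leaf region: multi-distance, multi-angle GLCM texture features are fed to an RBF-kernel SVM, and false positives and negatives can be read off a confusion matrix. The library choices (scikit-image, scikit-learn) are assumptions, not the authors' implementation.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix

def glcm_features(region_u8):
    """region_u8: 2-D uint8 array of the segmented leaf region."""
    glcm = graycomatrix(region_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# X: stacked feature vectors, y: 0 = healthy, 1 = HLB-infected (hypothetical arrays)
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(X_train, y_train)
# tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()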


2021 ◽  
Vol 7 (10) ◽  
pp. 850
Author(s):  
Veena Mayya ◽  
Sowmya Kamath Shevgoor ◽  
Uma Kulkarni ◽  
Manali Hazarika ◽  
Prabal Datta Barua ◽  
...  

Microbial keratitis is an infection of the cornea of the eye that is commonly caused by prolonged contact lens wear, corneal trauma, pre-existing systemic disorders, and other ocular surface disorders. It can result in severe visual impairment if improperly managed. According to the latest World Vision Report, at least 4.2 million people worldwide suffer from corneal opacities caused by infectious agents such as fungi, bacteria, protozoa, and viruses. In patients with fungal keratitis (FK), overt symptoms are often not evident until an advanced stage. Furthermore, it has been reported that clearly discriminating between bacterial keratitis and FK is challenging even for trained corneal experts, and more than 30% of cases are misdiagnosed. If diagnosed early, however, vision impairment can be prevented through cost-effective interventions. In this work, we propose a multi-scale convolutional neural network (MS-CNN) for accurate segmentation of the corneal region to enable early FK diagnosis. The proposed approach consists of a deep neural pipeline for corneal region segmentation followed by a ResNeXt model to differentiate between FK and non-FK classes. The model, trained on the segmented images of the region of interest, achieved a diagnostic accuracy of 88.96%. The features learnt by the model emphasize that it can correctly identify dominant corneal lesions for detecting FK.
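
The sketch below covers only the second, classification stage: a torchvision ResNeXt-50 backbone with a two-class head applied to an already segmented corneal region. The multi-scale segmentation network, the exact ResNeXt variant, and the training regime of the paper are not reproduced; the ROI file name is an assumption.

import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnext50_32x4d(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)             # FK vs non-FK head
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical inference on a segmented corneal ROI:
# from PIL import Image
# roi = Image.open("cornea_roi.png").convert("RGB")
# with torch.no_grad():
#     prob_fk = model(preprocess(roi).unsqueeze(0)).softmax(dim=1)[0, 1].item()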


2020 ◽  
Vol 6 ◽  
Author(s):  
David Owen ◽  
Laurence Livermore ◽  
Quentin Groom ◽  
Alex Hardisty ◽  
Thijs Leegwater ◽  
...  

We describe an effective approach to automated text digitisation of natural history specimen labels. These labels contain much useful data about the specimen, including its collector, country of origin, and collection date. Our approach to automatically extracting these data takes the form of a pipeline. Recommendations are made for the pipeline's component parts based on some of the state-of-the-art technologies. Optical Character Recognition (OCR) can be used to digitise text on images of specimens. However, recognising text quickly and accurately from these images can be a challenge for OCR. We show that OCR performance can be improved by prior segmentation of specimen images into their component parts. This ensures that only text-bearing labels are submitted for OCR processing, as opposed to whole specimen images, which inevitably contain non-textual information that may lead to false positive readings. In our testing, Tesseract OCR version 4.0.0 offered promising text recognition accuracy with segmented images. Not all the text on specimen labels is printed. Handwritten text varies much more and does not conform to standard shapes and sizes of individual characters, which poses an additional challenge for OCR. Recently, deep learning has allowed significant advances in this area. Google's Cloud Vision, which is based on deep learning, is trained on large-scale datasets and is shown to be quite adept at this task. This may take us some way towards negating the need for humans to routinely transcribe handwritten text. Determining the countries and collectors of specimens has been the goal of previous automated text digitisation research activities. Our approach also focuses on these two pieces of information. An area of Natural Language Processing (NLP) known as Named Entity Recognition (NER) has matured enough to semi-automate this task. Our experiments demonstrated that existing approaches can accurately recognise location and person names within the text extracted from segmented images via Tesseract version 4.0.0. Potentially, NER could be used in conjunction with other online services, such as those of the Biodiversity Heritage Library, to map the named entities to entities in the biodiversity literature (https://www.biodiversitylibrary.org/docs/api3.html). We have highlighted the main recommendations for potential pipeline components. The document also provides guidance on selecting appropriate software solutions, including automatic language identification, terminology extraction, and integrating all pipeline components into a scientific workflow to automate the overall digitisation process.
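
The OCR and NER stages described above can be approximated with off-the-shelf tools, as in the sketch below, which runs Tesseract (via pytesseract) over a segmented label image and then applies a general-purpose spaCy model to pick out person and location candidates. The model and file names are assumptions, and this is not the authors' exact configuration.

import pytesseract
import spacy
from PIL import Image

nlp = spacy.load("en_core_web_sm")           # small English NER model (downloaded separately)

def label_entities(label_image_path):
    """OCR a segmented label image, then extract person and location candidates."""
    text = pytesseract.image_to_string(Image.open(label_image_path))
    doc = nlp(text)
    people = [e.text for e in doc.ents if e.label_ == "PERSON"]
    places = [e.text for e in doc.ents if e.label_ in ("GPE", "LOC")]
    return text, people, places

# Hypothetical usage on one cropped label:
# text, collectors, locations = label_entities("segmented_label.png")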


2018 ◽  
Vol 27 (4) ◽  
pp. 681-697
Author(s):  
Lawrence Livingston Godlin Atlas ◽  
Kumar Parasuraman

The main objective of this study is to develop a framework for detecting and segmenting hemorrhages in retinal fundus images. Abnormal bleeding of the blood vessels in the retina, the membrane at the back of the eye, is called retinal hemorrhage. In the proposed approach, the image sets are considered and an adaptive median filter is applied to denoise the images. After filtering, gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), and scale-invariant feature transform (SIFT) features are extracted. A classification stage based on an adaptive neuro-fuzzy inference system (ANFIS) then separates hemorrhage-affected from unaffected images. The affected images are passed to the segmentation procedure, in which threshold optimization is evaluated with several optimization methods; of these, particle swarm optimization performs best. The segmented images are produced accordingly, and in the MATLAB implementation the sensitivity is high relative to the accuracy and specificity.
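
As an illustration of the threshold-optimization step, the sketch below uses a small particle swarm to search for a grey-level threshold, with between-class (Otsu-style) variance standing in as the fitness function; the study's actual objective function, the ANFIS classifier, and the feature extraction are not reproduced.

import numpy as np

def between_class_variance(gray, t):
    """Otsu-style fitness: weighted squared difference of class means at threshold t."""
    fg, bg = gray[gray > t], gray[gray <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / gray.size, bg.size / gray.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def pso_threshold(gray, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Search [0, 255] for the threshold maximising the fitness above."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 255, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_fit = np.array([between_class_variance(gray, t) for t in pos])
    gbest = pbest[pbest_fit.argmax()]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        fit = np.array([between_class_variance(gray, t) for t in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()]
    return gbest

# Hypothetical usage on a filtered fundus image:
# gray = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE).astype(float)
# mask = gray > pso_threshold(gray.ravel())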


Author(s):  
Bálint Daróczy ◽  
Zsolt Fekete ◽  
Mátyás Brendel ◽  
Simon Rácz ◽  
András Benczúr ◽  
...  
