Bioinformatics-Inspired Algorithms for 2D-Image Analysis: Application to Synthetic and Medical Images, Part I

2012 ◽  
Vol 1 (1) ◽  
pp. 14-38
Author(s):  
Perambur S. Neelakanta ◽  
Deepti Pappusetty

To ascertain specific features in bio-/medical images, a new approach based on the Needleman-Wunsch (NW) and Smith-Waterman (SW) algorithms of bioinformatics is proposed. In genomic science, the NW/SW algorithms are used to obtain optimal (global and local) alignments of two linear sequences (such as DNA nucleotide bases) and thereby determine their similarity; here, these 1D-sequence algorithms are extended to compare 2D images via binary correlation. The efficacy of the proposed method is tested with synthetic images and a brain scan image, demonstrating how to locate a distinct part in a synthetic image and a tumour in the brain scan image.
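To give a concrete sense of the 1D alignment machinery the paper builds on, the following is a minimal sketch of the classic Needleman-Wunsch dynamic-programming recurrence. The scoring values and the 0/1 row strings are illustrative assumptions; the authors' 2D binary-correlation extension is not reproduced here.

```python
# Minimal sketch of the classic 1D Needleman-Wunsch recurrence.
# Scores and the example sequences are illustrative assumptions.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global-alignment score of sequences a and b."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

# Example: compare two binarised image rows encoded as 0/1 strings.
print(needleman_wunsch("0110110", "0100110"))
```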

2019 ◽  
Author(s):  
Iris Berent ◽  
Melanie Platt

Recent results suggest that people hold a notion of the true self that is distinct from the self. Here, we seek to further elucidate the “true me”: whether it is good or bad, material or immaterial. Critically, we ask whether the true self is unitary. To address these questions, we invited participants to reason about John, a character who simultaneously exhibits both positive and negative moral behaviors. John’s character was gauged via two tests, a brain scan and a behavioral test, whose results invariably diverged (i.e., one test indicated that John’s moral core is positive and the other that it is negative). Participants assessed John’s true self via two questions: (a) Did John commit his acts (positive and negative) freely? and (b) What is John’s essence, really? Responses to the two questions diverged. When asked to evaluate John’s moral core explicitly (by reasoning about his free will), people invariably described John’s true self as good. But when John’s moral core was assessed implicitly (by considering his essence), people sided with the outcome of the brain test. These results demonstrate that people hold conflicting notions of the true self. We formally support this proposal by presenting a grammar of the true self, couched within Optimality Theory, and show that the constraint rankings necessary to capture the explicit and implicit views of the true self are distinct. Our intuitive belief in a true unitary “me” is thus illusory.


2020 ◽  
Vol 13 (5) ◽  
pp. 999-1007
Author(s):  
Karthikeyan Periyasami ◽  
Arul Xavier Viswanathan Mariammal ◽  
Iwin Thanakumar Joseph ◽  
Velliangiri Sarveshwaran

Background: Medical image analysis applications have complex resource requirements, and scheduling them onto grid resources is a difficult task; a new model is needed to improve the breast cancer screening process. The proposed novel meta-scheduler algorithm allocates image analysis applications to local schedulers; each local scheduler submits the job to a grid node, which analyses the medical image and sends the result back to the meta-scheduler. Meta-schedulers are distinct from local schedulers, but both are concerned with resource allocation and management. Objective: The main objective of the CDAM meta-scheduler is to maximize the number of jobs accepted. Methods: First, users send jobs with deadlines to the global grid resource broker. Resource providers send information about the available resources connected to the network, such as resource valuations and the number of free resources, to the global grid resource broker at fixed intervals. CDAM requests the available resource details and user jobs from the global grid resource broker and, after receiving this information, matches jobs with resources. CDAM sends jobs to the local schedulers, and each local scheduler schedules its job onto the local grid site. The local grid site executes the jobs and sends the results back to CDAM. On successful completion, the job status and resource status are updated in the auction history database. CDAM collects the results from all local grid sites and returns them to the grid users. Results: CDAM was simulated using a grid simulator. As the number of jobs increases, the percentage of jobs accepted decreases due to the scarcity of resources. CDAM performs 2% to 5% better than the fair-share meta-scheduling algorithm. The CDAM bid density value is generated from the user requirement and user history, and the ask value is generated from the resource details. Users with the tightest deadlines generate the highest bid values, and grid resources with the fastest processors generate the lowest ask values. The highest bid is assigned to the lowest ask, meaning that the user with the tightest deadline is assigned to the grid resource with the fastest processor. The deadline represents the time by which the user requires the result; the user can define this deadline, and CDAM tries to find the fastest available resource that meets it. If the scheduler detects that a task cannot be completed before its deadline, it abandons the current resource and tries the next fastest resource, repeating until the application can complete within the deadline. CDAM performs 25% better than the GridWay meta-scheduler, because GridWay allocates jobs to resources on a first-come, first-served basis. Conclusion: The proposed CDAM model was validated through simulation and evaluated on the number of jobs accepted. The experimental results clearly show that the CDAM model accepts more jobs than conventional meta-schedulers. We conclude that CDAM is a highly effective meta-scheduling system that can be used in extraordinary situations where jobs have combinatorial requirements.
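The abstract describes CDAM's auction-style pairing rule: the tightest deadline yields the highest bid, the fastest processor yields the lowest ask, and the highest bid is matched with the lowest ask. Below is a minimal sketch of that pairing rule under assumed field names and toy data; it is not the authors' CDAM implementation.

```python
# Sketch of the bid/ask pairing rule described in the abstract:
# the job with the tightest deadline (highest bid) is matched with the
# fastest free resource (lowest ask). Field names and data are assumptions.

from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    deadline: float   # seconds until the result is required

@dataclass
class Resource:
    resource_id: str
    speed: float      # relative processor speed (higher = faster)

def match_jobs_to_resources(jobs, resources):
    """Pair highest-bid jobs with lowest-ask resources."""
    bids = sorted(jobs, key=lambda j: j.deadline)      # most urgent first
    asks = sorted(resources, key=lambda r: -r.speed)   # fastest first
    return list(zip(bids, asks))

jobs = [Job("mammogram-batch", 120.0), Job("routine-scan", 600.0)]
resources = [Resource("grid-node-A", 2.5), Resource("grid-node-B", 1.0)]
for job, res in match_jobs_to_resources(jobs, resources):
    print(f"{job.job_id} -> {res.resource_id}")
```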


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Julian Bär ◽  
Mathilde Boumasmoud ◽  
Roger D. Kouyos ◽  
Annelies S. Zinkernagel ◽  
Clément Vulin

An amendment to this paper has been published and can be accessed via a link at the top of the paper.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Johan Baijot ◽  
Stijn Denissen ◽  
Lars Costers ◽  
Jeroen Gielen ◽  
Melissa Cambron ◽  
...  

Graph-theoretical analysis (GTA) is a novel tool for understanding the organisation of the brain. We assessed whether altered graph-theoretical parameters, as observed in multiple sclerosis (MS), reflect pathology-induced restructuring of the brain's functioning or result from reduced signal quality in functional MRI (fMRI). In a cohort of 49 people with MS and a matched group of 25 healthy subjects (HS), we performed a cognitive evaluation and acquired fMRI. From the fMRI measurement, Pearson correlation-based networks were calculated and graph-theoretical parameters reflecting global and local brain organisation were obtained. Additionally, we assessed metrics of scanning quality (signal-to-noise ratio, SNR) and fMRI signal quality (temporal SNR and contrast-to-noise ratio, CNR). In accordance with the literature, we found that the network parameters were altered in MS compared to HS. However, no significant link was found with cognition. Scanning quality (SNR) did not differ between the cohorts. In contrast, measures of fMRI signal quality were significantly different and explained the observed differences in GTA parameters. Our results suggest that differences in network parameters between MS and HS in fMRI do not reflect a functional reorganisation of the brain, but rather arise from reduced fMRI signal quality.
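As an illustration of the kind of pipeline the abstract describes, the sketch below builds a Pearson correlation-based network from synthetic region time series and computes two common graph-theoretical parameters. The threshold, region count, and chosen metrics are assumptions, since the abstract does not specify the exact pipeline.

```python
# Illustrative sketch: Pearson correlation network from fMRI-like time series,
# then simple graph-theoretical parameters. Threshold and metrics are assumed.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 90))   # 200 time points x 90 regions

corr = np.corrcoef(timeseries, rowvar=False)  # 90 x 90 correlation matrix
np.fill_diagonal(corr, 0.0)

adjacency = (np.abs(corr) > 0.3).astype(int)  # binarise with an assumed threshold
graph = nx.from_numpy_array(adjacency)

print("global efficiency:", nx.global_efficiency(graph))
print("mean clustering:  ", nx.average_clustering(graph))
```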


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
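The general pattern described here, a CNN extracting low-level features per modality and a transformer establishing long-range dependencies across modalities, can be sketched roughly as below in PyTorch. Layer sizes, the shared backbone, and the fusion scheme are assumptions for illustration and do not reproduce the published TransMed architecture.

```python
# Toy sketch of a CNN-then-transformer multi-modal classifier.
# All dimensions and the fusion scheme are assumptions, not TransMed itself.

import torch
import torch.nn as nn

class CnnTransformerClassifier(nn.Module):
    def __init__(self, num_classes=3, dim=128):
        super().__init__()
        # Small shared CNN backbone producing one feature vector per modality.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        # x: (batch, modalities, 1, H, W)
        b, m = x.shape[:2]
        tokens = self.backbone(x.flatten(0, 1)).view(b, m, -1)  # one token per modality
        fused = self.encoder(tokens).mean(dim=1)                # fuse across modalities
        return self.head(fused)

model = CnnTransformerClassifier()
scans = torch.randn(4, 2, 1, 64, 64)    # 4 patients, 2 modalities each
print(model(scans).shape)               # torch.Size([4, 3])
```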


Radiology ◽  
1966 ◽  
Vol 86 (6) ◽  
pp. 1082-1084 ◽  
Author(s):  
Juan Roig ◽  
William T. Moss ◽  
James L. Quinn

2014 ◽  
Vol 70 (6) ◽  
pp. 955-963 ◽  
Author(s):  
Ewa Liwarska-Bizukojc ◽  
Marcin Bizukojc ◽  
Olga Andrzejczak

Quantification of filamentous bacteria in activated sludge systems can be performed by manual counting under a microscope or by applying automated image analysis procedures; the latter have developed significantly over the last two decades. In this work a new method based upon automated image analysis techniques was elaborated and presented. It consisted of three stages: (a) Neisser staining, (b) grabbing of microscopic images, and (c) digital image processing and analysis. This automated image analysis procedure is novel in two respects. It simultaneously delivers data about aggregates and filaments in a single calculation routine, which is seldom the case in procedures described in the literature so far. More importantly, the macroprogram performing the image processing and the calculation of morphological parameters was written in the same software used for grabbing the images, whereas previously published procedures required two different types of software, one for image grabbing and another for image processing and analysis. Application of this new procedure to the quantification of filamentous bacteria in full-scale as well as laboratory activated sludge systems proved that it is simple, fast and delivers reliable results.
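One plausible way an automated routine could separate filament-like objects from aggregates after staining and image grabbing is thresholding followed by skeletonisation, sketched below with scikit-image. The function name, threshold choice, and length cut-off are hypothetical; the paper's macroprogram is not reproduced here.

```python
# Rough sketch: threshold a stained micrograph, skeletonise, and count
# filament-like objects by skeleton length. All parameters are assumptions.

import numpy as np
from skimage import filters, measure, morphology

def quantify_filaments(gray_image, min_filament_length=50):
    """Return counts of all objects and of filament-like objects."""
    binary = gray_image > filters.threshold_otsu(gray_image)       # global threshold
    binary = morphology.remove_small_objects(binary, min_size=20)  # drop noise specks
    skeleton = morphology.skeletonize(binary)                      # 1-pixel-wide traces
    labels = measure.label(skeleton)
    lengths = [region.area for region in measure.regionprops(labels)]
    filaments = sum(length >= min_filament_length for length in lengths)
    return {"objects": len(lengths), "filament_like": filaments}

# Example on a synthetic noisy image (stand-in for a real micrograph).
rng = np.random.default_rng(1)
print(quantify_filaments(rng.random((256, 256))))
```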


2009 ◽  
Vol 17 (2) ◽  
Author(s):  
L. Ogiela

The main subject of this publication is to present a selected class of cognitive categorisation systems, understanding-based image analysis systems (UBIAS), which support the analysis of data recorded in the form of images. Cognitive categorisation systems operate by following the types of thought, cognitive, and reasoning processes that take place in the human mind and that ultimately lead to an in-depth description of the analysis and reasoning process. The most important element of this process is that it occurs both in the human cognitive/thinking process and in the system's information/reasoning process, which conducts the in-depth interpretation and analysis of the data.

