Biomedical Modeling
Recently Published Documents


TOTAL DOCUMENTS: 28 (FIVE YEARS: 1)

H-INDEX: 6 (FIVE YEARS: 1)

2020 ◽  
Vol 52 (1) ◽  
pp. 421-448 ◽  
Author(s):  
Boyce E. Griffith ◽  
Neelesh A. Patankar

Fluid–structure interaction is ubiquitous in nature and occurs at all biological scales. Immersed methods provide mathematical and computational frameworks for modeling fluid–structure systems. These methods, which typically use an Eulerian description of the fluid and a Lagrangian description of the structure, can treat thin immersed boundaries and volumetric bodies, and they can model structures that are flexible or rigid or that move with prescribed deformational kinematics. Immersed formulations do not require body-fitted discretizations and thereby avoid the frequent grid regeneration that can otherwise be required for models involving large deformations and displacements. This article reviews immersed methods for both elastic structures and structures with prescribed kinematics. It considers formulations using integral operators to connect the Eulerian and Lagrangian frames and methods that directly apply jump conditions along fluid–structure interfaces. Benchmark problems demonstrate the effectiveness of these methods, and selected applications at Reynolds numbers up to approximately 20,000 highlight their impact in biological and biomedical modeling and simulation.
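As an illustration of the Eulerian–Lagrangian coupling described above, the following is a minimal Python sketch of the two operators at the heart of classical immersed boundary methods: spreading Lagrangian point forces onto the Eulerian fluid grid and interpolating the grid velocity back to the structure, both built from Peskin's 4-point regularized delta function. This is a generic sketch, not code from the review; the fluid solver itself is omitted, and the grid, function names, and toy usage are invented for illustration.

```python
import numpy as np

def phi4(r):
    """Peskin's 4-point kernel phi(r), support |r| < 2 (r in grid units)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    near = r < 1.0
    far = (r >= 1.0) & (r < 2.0)
    out[near] = (3 - 2*r[near] + np.sqrt(1 + 4*r[near] - 4*r[near]**2)) / 8
    out[far] = (5 - 2*r[far] - np.sqrt(-7 + 12*r[far] - 4*r[far]**2)) / 8
    return out

def kernel_2d(gx, gy, x, y, h):
    """Tensor-product regularized delta delta_h evaluated on the whole grid."""
    return np.outer(phi4((gx - x) / h), phi4((gy - y) / h)) / h**2

def spread_force(X, F, gx, gy, h):
    """Spreading operator S: Lagrangian point forces F -> Eulerian force density."""
    f = np.zeros((gx.size, gy.size, 2))
    for (x, y), Fk in zip(X, F):
        f += kernel_2d(gx, gy, x, y, h)[..., None] * Fk
    return f

def interp_velocity(u, X, gx, gy, h):
    """Adjoint operator S*: Eulerian velocity u -> velocities of the points X."""
    U = np.zeros((len(X), 2))
    for k, (x, y) in enumerate(X):
        w = kernel_2d(gx, gy, x, y, h)
        U[k] = (u * w[..., None]).sum(axis=(0, 1)) * h**2
    return U

# Toy usage: one Lagrangian point at the domain center, unit downward force.
h = 0.1
gx = gy = np.arange(0.0, 1.0, h)
X = np.array([[0.5, 0.5]])
F = np.array([[0.0, -1.0]])
f = spread_force(X, F, gx, gy, h)
# The total spread force equals the point force, since delta_h sums to 1/h^2.
print(f.sum(axis=(0, 1)) * h**2)  # ~ [0, -1]
```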


2018 ◽  
Author(s):  
David W Wright ◽  
Robin A Richardson ◽  
Peter V Coveney

The concept underlying precision medicine is that prevention, diagnosis, and treatment of pathologies such as cancer can be improved through an understanding of the influence of individual patient characteristics. Predictive medicine seeks to derive this understanding through mechanistic models of the causes and (potential) progression of disease within a given individual. This represents a grand challenge for computational biomedicine, as it requires the integration of highly varied (and potentially vast) quantitative experimental datasets into models of complex biological systems. It is becoming increasingly clear that this challenge can only be met through complex workflows that combine diverse analyses and whose design is informed by an understanding of how predictions must be accompanied by estimates of uncertainty. Each stage in such a workflow can, in general, have very different computational requirements. If funding bodies and the HPC community are serious about supporting such approaches, they must consider the need for portable, persistent, and stable tools designed to promote extensive long-term development and testing of these workflows. From the perspective of model developers (and with even greater relevance to potential clinical or experimental collaborators), the enormous diversity of interfaces and supercomputer policies, frequently designed with monolithic applications in mind, can represent a serious barrier to innovation. Here we use experiences from work on two very different biomedical modeling scenarios, brain blood flow and small-molecule drug selection, to highlight issues with current programming and execution environments and to suggest potential solutions.


2018 ◽  
Author(s):  
Marouen Ben Guebila

Genome-scale metabolic models (GSMMs) of living organisms are used in a wide variety of applications in health and bioengineering. They are formulated as linear programs (LPs) that are often under-determined. Flux variability analysis (FVA) characterizes the space of alternate optimal solutions (AOS), enabling assessment of the robustness of a solution. fastFVA (FFVA), the C implementation of MATLAB FVA, achieved a substantial speedup, although its parallelism was still managed through MATLAB. Here, veryfastFVA (VFFVA) is presented, a pure C implementation of FVA that manages parallelism at a lower level through a hybrid MPI/OpenMP approach. This flexibility gives VFFVA a threefold speedup and a 14-fold reduction in memory usage compared to FFVA. VFFVA therefore allows more GSMMs to be processed in less time, accelerating biomedical modeling and simulation. VFFVA is available online at https://github.com/marouenbg/VFFVA.
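To make the FVA computation concrete, here is a minimal Python sketch of the per-reaction min/max linear programs that FVA solves, using scipy.optimize.linprog. VFFVA itself is a C code parallelized with MPI/OpenMP; this sketch only illustrates the underlying algorithm, and the function name, toy model, and `frac` parameter are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def fva(S, lb, ub, c, frac=1.0):
    """Return the (min, max) flux range of every reaction, holding the
    objective c^T v at `frac` times its optimum, subject to S v = 0 and
    the flux bounds lb <= v <= ub."""
    n = S.shape[1]
    bounds = list(zip(lb, ub))
    b_eq = np.zeros(S.shape[0])
    # Step 1: find the optimal objective value (linprog minimizes, so negate c).
    opt = linprog(-c, A_eq=S, b_eq=b_eq, bounds=bounds, method="highs")
    z_opt = -opt.fun
    # Step 2: constrain c^T v >= frac * z_opt, written as -c^T v <= -frac * z_opt.
    A_ub = -c.reshape(1, -1)
    b_ub = np.array([-frac * z_opt])
    ranges = np.empty((n, 2))
    for i in range(n):
        # Minimize and maximize each flux v_i in turn: two LPs per reaction.
        e = np.zeros(n); e[i] = 1.0
        lo = linprog(e, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=b_eq,
                     bounds=bounds, method="highs")
        hi = linprog(-e, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=b_eq,
                     bounds=bounds, method="highs")
        ranges[i] = (lo.fun, -hi.fun)
    return ranges

# Toy model: one metabolite with an uptake and a secretion reaction.
S = np.array([[1.0, -1.0]])
lb, ub = np.array([0.0, 0.0]), np.array([10.0, 10.0])
c = np.array([0.0, 1.0])            # maximize the secretion flux
print(fva(S, lb, ub, c, frac=0.9))  # both fluxes range over [9, 10]
```

The per-reaction LPs are independent, which is exactly what makes FVA embarrassingly parallel and why an MPI/OpenMP partitioning of the reaction loop pays off.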


2013 ◽  
Vol 75 (8) ◽  
pp. 1233-1237
Author(s):  
Mark Alber ◽  
Philip K. Maini ◽  
Glen Niebur

2012 ◽  
pp. 724-768
Author(s):  
Jesmin Nahar ◽  
Kevin S. Tickle ◽  
A. B.M. Shawkat Ali

Extracting useful information from structured and unstructured biological data is crucial in the health industry. For example, medical practitioners need to identify breast cancer patients at an early stage, estimate the survival time of heart disease patients, or recognize uncommon disease characteristics when they suddenly appear. There is currently an explosion of biological data available in databases, but information extraction and true open access to data take time, owing to issues such as ethical clearance. The emergence of novel IT technologies allows health practitioners to carry out comprehensive analyses of medical images, genomes, transcriptomes, and proteomes in health and disease. The information extracted through such technologies may soon bring a dramatic change in the pace of medical research and have a considerable impact on patient care. This research reviews the existing technologies used in heart and cancer research and provides possible solutions to overcome their limitations. In summary, the primary objective of this research is to investigate how existing machine learning techniques, with their strengths and limitations, are being used in the identification of heartbeat-related disease and the early detection of cancer in patients. Following an extensive literature review, the chosen objectives are: to develop a new approach to find associations between diseases such as high blood pressure, stroke, and heartbeat irregularities; to propose an improved feature selection method for analyzing large image and microarray databases with machine learning algorithms in cancer research; to find an automatic distance function selection method for clustering tasks; to discover the most significant risk factors for specific cancers; and to determine the preventive factors for specific cancers that are aligned with the most significant risk factors. A research plan to attain these objectives is proposed within this chapter. The possible solutions to the above objectives are: new heartbeat identification techniques show promising associations between heartbeat patterns and disease; sensitivity-based feature selection methods will be applied to early cancer patient classification; meta-learning approaches will be adopted in clustering algorithms to select a distance function automatically; and the Apriori algorithm will be applied to discover the significant risk and preventive factors for specific cancers. We expect this research to make a significant contribution toward enabling medical professionals to deliver more accurate diagnoses and better patient care. It will also contribute to other areas such as biomedical modeling, medical image analysis, and early disease warning.
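As an illustration of the association-rule mining step proposed above, here is a minimal, unoptimized Apriori sketch in pure Python. The transactions and support threshold are invented toy values, not study data; this brute-force version omits the classical candidate-pruning step, which a production implementation (or a library) would include.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset whose support meets min_support, with its support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    support = lambda items: sum(items <= t for t in transactions) / n
    # Frequent 1-itemsets seed the search.
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support}
    result, k = {}, 1
    while current:
        for s in current:
            result[s] = support(s)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k + 1}
        current = {c for c in candidates if support(c) >= min_support}
        k += 1
    return result

# Toy example: patient records as sets of risk factors.
records = [{"smoking", "hypertension"}, {"smoking", "obesity"},
           {"smoking", "hypertension", "obesity"}, {"hypertension"}]
print(apriori(records, min_support=0.5))
```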


2011 ◽  
Vol 8 (10) ◽  
pp. 588-599 ◽  
Author(s):  
Catherine O’Brien ◽  
Laurie A. Blanchard ◽  
Bruce S. Cadarette ◽  
Thomas L. Endrusick ◽  
Xiaojiang Xu ◽  
...  
