Continuum microhaemodynamics modelling using inverse rheology

Author(s): Joseph van Batenburg-Sherwood, Stavroula Balabani

Abstract Modelling blood flow in microvascular networks is challenging due to the complex nature of haemorheology. Zero- and one-dimensional approaches cannot reproduce local haemodynamics, and models that consider individual red blood cells (RBCs) are prohibitively computationally expensive. Continuum approaches could provide an efficient solution, but dependence on a large parameter space and the scarcity of experimental data for validation have limited their application. We describe a method to assimilate experimental RBC velocity and concentration data into a continuum numerical modelling framework. Imaging data of RBCs were acquired in a sequentially bifurcating microchannel for various flow conditions. RBC concentration distributions were evaluated and mapped into computational fluid dynamics simulations with rheology prescribed by the Quemada model. Predicted velocities were compared to particle image velocimetry data. A subset of cases was used for parameter optimisation, and the resulting model was applied to a wider data set to evaluate model efficacy. The pre-optimised model reduced errors in predicted velocity by 60% compared to assuming a Newtonian fluid, and optimisation reduced errors by a further 40%. Asymmetry of the RBC velocity and concentration profiles was demonstrated to play a critical role. Excluding asymmetry in the RBC concentration doubled the error, whereas excluding spatial distributions of shear rate had little effect. This study demonstrates that a continuum model with optimised rheological parameters can reproduce measured velocity if RBC concentration distributions are known a priori. Extending this approach to RBC transport in further network configurations has the potential to provide an efficient means of modelling network-scale haemodynamics.
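The rheology here is prescribed by the Quemada model. The abstract does not report the fitted parameter values, so the minimal Python sketch below uses the model's standard functional form with illustrative constants (mu_plasma, k0, k_inf and gamma_c are assumptions, not the paper's optimised values):

```python
import numpy as np

def quemada_viscosity(shear_rate, haematocrit,
                      mu_plasma=1.2e-3,   # plasma viscosity [Pa s], illustrative
                      k0=4.33, k_inf=2.07, gamma_c=1.88):
    """Quemada effective viscosity of an RBC suspension.

    mu = mu_plasma * (1 - 0.5 * k * phi)**-2, where the intrinsic
    viscosity k interpolates between its zero- and infinite-shear
    limits as a function of the reduced shear rate gamma/gamma_c.
    """
    g = np.sqrt(np.asarray(shear_rate, dtype=float) / gamma_c)
    k = (k0 + k_inf * g) / (1.0 + g)
    return mu_plasma * (1.0 - 0.5 * k * haematocrit) ** -2

# Example: local viscosity at 20% haematocrit across a range of shear rates [1/s]
print(quemada_viscosity(shear_rate=[1.0, 10.0, 100.0], haematocrit=0.2))
```

In a CFD framework of the kind described, a function like this would be evaluated cell-by-cell from the mapped local haematocrit and the computed local shear rate.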

Author(s): Daniel Butcher, Adrian Spencer

Abstract With the increasing complexity of aerodynamic devices such as gas turbine fuel swirl nozzles (FSNs) and combustors, the need for time-resolved, full-volume flow characterisation is growing. Even with modern advancements in both numerical and experimental methods, this remains a challenging area. The work presented in this paper combines multiple non-synchronous planar measurements to reconstruct an estimate of a synchronous, instantaneous flow field over the whole measurement set. Temporal information is retained through the linear stochastic estimation (LSE) technique. The technique is described, applied and validated using a simplified combustor and FSN geometry flow for which 3-component, 3-dimensional (3C3D) flow information is known from published tomographic PIV [1]. Using the tomographic PIV data set, multiple virtual 'planes' may be extracted to emulate single planar PIV measurements and produce the correlations required for LSE. In this example, multiple parallel planes are synchronised with a single perpendicular plane that intersects each of them. As the underlying data set is volumetric, the measured velocity is known a priori and can therefore be compared directly to the estimated velocity field for validation purposes. The work shows that when the input time-resolved planar velocity measurements are first POD (proper orthogonal decomposition) filtered, high correlation between the estimations and the validation velocity volumes is possible. This results in estimated full-volume velocity distributions available at the same time instant as the input field, i.e. a time-resolved velocity estimation at the frequency of the single input plane. While 3C3D information is used in the presented work, it is necessary only for validation; in a true application, only planar techniques would be used. The study concludes that, provided the number of sensors used as LSE input exceeds the number of POD modes used for pre-filtering, it is possible to achieve correlations greater than 99%.
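As a rough illustration of the estimation step, the sketch below builds LSE coefficients from joint plane/volume snapshots and applies POD pre-filtering to the sensor data. The function names, array shapes and the synthetic demo are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def lse_coefficients(sensors, fields):
    """Least-squares LSE coefficients B such that fields ~= sensors @ B.

    sensors: (n_snapshots, n_sensors) planar measurements
    fields:  (n_snapshots, n_points)  simultaneous volumetric data
    """
    B, *_ = np.linalg.lstsq(sensors, fields, rcond=None)
    return B

def pod_filter(snapshots, n_modes):
    """Retain only the first n_modes POD modes of a snapshot matrix."""
    mean = snapshots.mean(axis=0)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean + (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]

# Tiny synthetic demo: 200 snapshots, 16 "sensors", 500 volume points
# sharing 4 latent dynamic modes (noise-free, so the fit is near-exact).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 4))
planes = latent @ rng.normal(size=(4, 16))
volumes = latent @ rng.normal(size=(4, 500))
B = lse_coefficients(pod_filter(planes, 4), volumes)
print(np.abs(pod_filter(planes, 4) @ B - volumes).max())  # ~0 here
```

Note that the demo respects the paper's conclusion: 16 sensors exceed the 4 POD modes used for pre-filtering, so the least-squares system is well posed.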


2021, pp. 108602662110316
Author(s): Tiziana Russo-Spena, Nadia Di Paola, Aidan O’Driscoll

Effective climate change action involves the critical role that companies must play in assuring the long-term human and social well-being of future generations. In our study, we offer a more holistic, inclusive, both–and approach to the challenge of environmental innovation (EI), using a novel methodology to identify relevant configurations for firms engaging in a superior EI strategy. A conceptual framework is proposed that identifies six sets of driving characteristics of EI and two sets of beneficial outcomes, all inherently tensional. Our analysis adopts a complementary rather than an oppositional point of view. A data set of 65 companies in the ICT value chain is analyzed via fuzzy-set qualitative comparative analysis (fsQCA) and a post-QCA procedure. The results reveal that achieving a superior EI strategy is possible in several scenarios. Specifically, after close examination, two main configuration groups emerge, referred to as technological environmental innovators and organizational environmental innovators.
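For readers unfamiliar with the calibration step that precedes an fsQCA analysis, the following sketch implements the standard direct-method calibration (mapping a raw measure onto fuzzy-set membership via three qualitative anchors). The anchor values in the example are hypothetical, not taken from this study:

```python
import numpy as np

def fsqca_calibrate(x, full_non, crossover, full_in):
    """Direct-method fuzzy-set calibration: map raw values onto [0, 1]
    membership using anchors for full non-membership, the crossover
    point (0.5) and full membership (log-odds of +/-3 at the anchors)."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_in - crossover),
        3.0 * (x - crossover) / (crossover - full_non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: calibrate a hypothetical EI-investment measure
print(fsqca_calibrate([2.0, 5.0, 9.0], full_non=1.0, crossover=5.0, full_in=8.0))
```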


Author(s): Laure Fournier, Lena Costaridou, Luc Bidaut, Nicolas Michoux, Frederic E. Lecouvet, ...

Abstract Existing quantitative imaging biomarkers (QIBs) are associated with known biological tissue characteristics and follow a well-understood path of technical, biological and clinical validation before incorporation into clinical trials. In radiomics, novel data-driven processes extract numerous visually imperceptible statistical features from the imaging data with no a priori assumptions on their correlation with biological processes. The selection of relevant features (the radiomic signature) and their incorporation into clinical trials therefore require additional considerations to ensure meaningful imaging endpoints. Moreover, the number of radiomic features tested means that power calculations would result in sample sizes impossible to achieve within clinical trials. This article examines how the process of standardising and validating data-driven imaging biomarkers differs from that for biomarkers based on biological associations. Radiomic signatures are best developed initially on datasets that represent diversity of acquisition protocols as well as diversity of disease and of normal findings, rather than within clinical trials with standardised and optimised protocols, as the latter would risk the selected radiomic features being linked to the imaging process rather than the pathology. Normalisation through discretisation and feature harmonisation are essential pre-processing steps. Biological correlation may be performed after the technical and clinical validity of a radiomic signature is established, but is not mandatory. Feature selection may be part of discovery within a radiomics-specific trial or represent exploratory endpoints within an established trial; a previously validated radiomic signature may even be used as a primary/secondary endpoint, particularly if associations are demonstrated with specific biological processes and pathways being targeted within clinical trials.
Key Points
• Data-driven processes like radiomics risk false discoveries due to the high dimensionality of the dataset compared to the sample size, making adequate diversity of the data, cross-validation and external validation essential to mitigate the risks of spurious associations and overfitting.
• Use of radiomic signatures within clinical trials requires multistep standardisation of image acquisition, image analysis and data-mining processes.
• Biological correlation may be established after clinical validation but is not mandatory.
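As an illustration of the discretisation step named above, this sketch shows an IBSI-style fixed-bin-width grey-level discretisation, one common choice for normalising intensities before radiomic feature extraction. The bin width and the simulated patch are assumptions for the example:

```python
import numpy as np

def discretise_fixed_bin_width(image, bin_width, min_intensity=None):
    """Grey-level discretisation with a fixed bin width: each voxel is
    assigned the 1-based index of the intensity bin it falls into."""
    img = np.asarray(image, dtype=float)
    lo = img.min() if min_intensity is None else min_intensity
    return np.floor((img - lo) / bin_width).astype(int) + 1

# Example: discretise a simulated CT patch into 25-HU-wide grey levels
patch = np.random.default_rng(0).normal(40.0, 20.0, size=(64, 64))
print(np.unique(discretise_fixed_bin_width(patch, bin_width=25.0)))
```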


2021, Vol 4 (1), pp. 251524592095492
Author(s): Marco Del Giudice, Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
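A minimal sketch of a multiverse-style (specification-curve) analysis on simulated data: two analytic decisions (outlier handling and variable transformation) define four specifications, and the effect of interest is estimated under each. All data, thresholds and specification names here are invented for illustration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)   # simulated true effect of 0.3

def estimate_effect(x, y):
    """OLS slope of y on x."""
    return np.polyfit(x, y, 1)[0]

# Two analytic decisions treated as (possibly) arbitrary specifications:
outlier_rules = {"none": 10.0, "trim_2.5sd": 2.5}   # exclusion cut in SD units
transforms = {"raw": lambda v: v,
              "rank": lambda v: np.argsort(np.argsort(v)).astype(float)}

effects = {}
for (o_name, cut), (t_name, f) in itertools.product(outlier_rules.items(),
                                                    transforms.items()):
    keep = np.abs(x) < cut * x.std()
    effects[(o_name, t_name)] = estimate_effect(f(x[keep]), f(y[keep]))

# The "specification curve": estimates sorted by size across specifications
for spec, b in sorted(effects.items(), key=lambda kv: kv[1]):
    print(spec, round(b, 3))
```

In the authors' terms, whether "raw" and "rank" belong in the same multiverse is exactly a Type E/N/U decision: if the transforms are not measurement-equivalent, pooling them can mislead.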


2021, Vol 12 (1)
Author(s): Elmar Kotter, Luis Marti-Bonmati, Adrian P. Brady, Nandita M. Desouza

Abstract Blockchain can be thought of as a distributed database that allows the origin of data to be traced, as well as who has manipulated a given data set in the past. Medical applications of blockchain technology are emerging. Blockchain has many potential applications in medical imaging, typically making use of the tracking of radiological or clinical data. Clinical applications include documenting the contribution of different "authors", including AI algorithms, to multipart reports; documenting the use of AI algorithms in reaching a diagnosis; enhancing the accessibility of relevant information in electronic medical records; and giving users better control over their personal health records. Applications of blockchain in research include better traceability of image data within clinical trials and of the contributions of image and annotation data to the training of AI algorithms, thus enhancing privacy and fairness and potentially making imaging data available for AI in larger quantities. Blockchain also allows for dynamic consenting and has the potential to empower patients by giving them better control over who has accessed their health data. There are also many potential applications of blockchain technology for administrative purposes, such as keeping track of learning achievements or the surveillance of medical devices. This article gives a brief introduction to the basic technology and terminology of blockchain and concentrates on its potential applications in medical imaging.
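The traceability property described above rests on hash chaining. Below is a minimal, illustrative sketch (not a full blockchain: it omits consensus and distribution) of how each record can commit to its predecessor so that retrospective tampering with an imaging audit trail becomes detectable. The payload fields are hypothetical:

```python
import hashlib
import json
import time

def add_block(chain, payload):
    """Append a record whose hash covers both its payload and the hash
    of the previous block, so altering any past record breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev_hash": prev_hash, "ts": time.time()}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = []
add_block(chain, {"study": "CT-123", "action": "acquired", "by": "scanner-7"})
add_block(chain, {"study": "CT-123", "action": "annotated", "by": "ai-model-v2"})

# Verify integrity: every block must reference its predecessor's hash
print(all(b["prev_hash"] == (chain[i - 1]["hash"] if i else "0" * 64)
          for i, b in enumerate(chain)))
```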


2021, Vol 11 (1)
Author(s): Zhixiang Yu, Haiyan He, Yanan Chen, Qiuhe Ji, Min Sun

Abstract Ovarian cancer (OV) is a common type of carcinoma in females. Many studies have reported that ferroptosis is associated with the prognosis of OV patients. However, the mechanism by which this occurs is not well understood. We utilized Genotype-Tissue Expression (GTEx) and The Cancer Genome Atlas (TCGA) data to identify ferroptosis-related genes in OV. In the present study, we applied Cox regression analysis to select hub genes and used the least absolute shrinkage and selection operator (LASSO) to construct a prognosis prediction model from mRNA expression profiles and clinical data in TCGA. A series of analyses of this signature was performed in TCGA. We then verified the identified signature using International Cancer Genome Consortium (ICGC) data. After a series of analyses, we identified six hub genes (DNAJB6, RB1, VIMP/SELENOS, STEAP3, BACH1, and ALOX12) that were then used to construct a model using a training data set. The model was then tested on a validation data set and was found to have high sensitivity and specificity. The identified ferroptosis-related hub genes might play a critical role in the mechanism of OV development, and the gene signature we identified may be useful for future clinical applications.
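A LASSO-Cox signature of this kind is typically applied as a linear risk score followed by a median split into risk groups. In the sketch below, only the six hub gene names come from the paper; the coefficients and expression values are hypothetical placeholders (the abstract does not report the fitted values):

```python
import numpy as np

# Hypothetical coefficients for the six hub genes (illustrative only)
coefs = {"DNAJB6": -0.21, "RB1": -0.15, "SELENOS": 0.32,
         "STEAP3": 0.18, "BACH1": 0.25, "ALOX12": -0.09}

def risk_score(expression):
    """Linear predictor of a Cox model: sum of coefficient * expression."""
    return sum(coefs[g] * expression[g] for g in coefs)

def stratify(scores):
    """Median split into high- and low-risk groups, as is common when
    evaluating prognostic gene signatures."""
    med = np.median(scores)
    return ["high" if s > med else "low" for s in scores]

patients = [{"DNAJB6": 1.2, "RB1": 0.8, "SELENOS": 2.1,
             "STEAP3": 1.0, "BACH1": 1.7, "ALOX12": 0.4},
            {"DNAJB6": 2.0, "RB1": 1.9, "SELENOS": 0.6,
             "STEAP3": 0.7, "BACH1": 0.9, "ALOX12": 1.1}]
print(stratify([risk_score(p) for p in patients]))
```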


2015, Vol 8 (2), pp. 941-963
Author(s): T. Vlemmix, F. Hendrick, G. Pinardi, I. De Smedt, C. Fayt, ...

Abstract. A 4-year data set of MAX-DOAS observations in the Beijing area (2008–2012) is analysed with a focus on NO2, HCHO and aerosols. Two very different retrieval methods are applied. Method A describes the tropospheric profile with 13 layers and makes use of the optimal estimation method. Method B uses 2–4 parameters to describe the tropospheric profile and an inversion based on a least-squares fit. For each constituent (NO2, HCHO and aerosols) the retrieval outcomes are compared in terms of tropospheric column densities, surface concentrations and "characteristic profile heights" (i.e. the height below which 75% of the vertically integrated tropospheric column density resides). We find the best agreement between the two methods for tropospheric NO2 column densities, with a standard deviation of relative differences below 10%, a correlation of 0.99 and a linear regression slope of 1.03. For tropospheric HCHO column densities we find a similar slope, but also a systematic bias of almost 10%, which is likely related to differences in profile height. Aerosol optical depths (AODs) retrieved with method B are 20% higher than those of method A. They are more in agreement with AERONET measurements, which are on average only 5% lower, albeit with considerable relative differences (standard deviation ~25%). With respect to near-surface volume mixing ratios and aerosol extinction we find considerably larger relative differences: 10 ± 30%, −23 ± 28% and −8 ± 33% for aerosols, HCHO and NO2, respectively. The frequency distributions of these near-surface concentrations nevertheless agree quite well, which indicates that near-surface concentrations derived from MAX-DOAS are certainly useful in a climatological sense. A major difference between the two methods is the dynamic range of retrieved characteristic profile heights, which is larger for method B than for method A. This effect is most pronounced for HCHO, where profile shapes retrieved with method A are very close to the a priori, and moderate for NO2 and aerosol extinction, which on average show quite good agreement for characteristic profile heights below 1.5 km. One of the main advantages of method A is its stability, even under suboptimal conditions (e.g. in the presence of clouds). Method B is generally less stable, and this probably explains a substantial part of the quite large relative differences between the two methods. However, despite the relatively low precision of individual profile retrievals, seasonally averaged profile heights retrieved with method B appear less biased towards a priori assumptions than those retrieved with method A. This gives confidence in the result obtained with method B, namely that aerosol extinction profiles on average tend to be higher than NO2 profiles in spring and summer, whereas they seem on average to be of the same height in winter; this result is especially relevant to the validation of satellite retrievals.
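The "characteristic profile height" defined above is straightforward to compute from any retrieved profile. A minimal sketch, assuming the profile is sampled on a discrete altitude grid (the exponential test profile is an assumption for the example):

```python
import numpy as np

def characteristic_profile_height(z, profile, fraction=0.75):
    """Height below which `fraction` of the vertically integrated
    tropospheric column resides (fraction = 0.75 in the paper)."""
    z = np.asarray(z, dtype=float)
    profile = np.asarray(profile, dtype=float)
    # Cumulative trapezoidal integral of the profile from the surface up
    c = np.concatenate([[0.0],
                        np.cumsum(0.5 * (profile[1:] + profile[:-1])
                                  * np.diff(z))])
    return float(np.interp(fraction * c[-1], c, z))

# Example: exponential NO2-like profile with a 1 km scale height
z = np.linspace(0.0, 4.0, 81)   # altitude [km]
print(characteristic_profile_height(z, np.exp(-z / 1.0)))  # ~1.33 km
```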


Geophysics, 2007, Vol 72 (1), pp. F25-F34
Author(s): Benoit Tournerie, Michel Chouteau, Denis Marcotte

We present and test a new method to correct for the static shift affecting magnetotelluric (MT) apparent resistivity sounding curves. We use geostatistical analysis of apparent resistivity and phase data for selected periods. For each period, we first estimate and model the experimental variograms and the cross variogram between phase and apparent resistivity. We then use the geostatistical model to estimate, by cokriging, the corrected apparent resistivities from the measured phases and apparent resistivities. The static shift factor is obtained as the difference between the logarithms of the corrected and measured apparent resistivities. We retain as final static shift estimates those for the period displaying the best correlation with the estimates at all periods. We present a 3D synthetic case study showing that the static shift is retrieved quite precisely when the static shift factors are uniformly distributed around zero. If the static shift distribution has a nonzero mean, the best results are obtained when an apparent resistivity data subset can be identified a priori as unaffected by static shift and cokriging is done using only this subset. The method has been successfully tested on the synthetic COPROD-2S2 2D MT data set and on a 3D-survey data set from Las Cañadas Caldera (Tenerife, Canary Islands) severely affected by static shift.
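The static shift definition quoted above translates directly into code once the cokriged estimate is available. A minimal sketch, with a synthetic single-value example standing in for a full sounding curve (the cokriging itself is not reproduced here):

```python
import numpy as np

def static_shift(rho_measured, rho_cokriged):
    """Static shift factor as defined in the paper: the difference
    between the logarithms of corrected (cokriged) and measured
    apparent resistivities."""
    return (np.log10(np.asarray(rho_cokriged, dtype=float))
            - np.log10(np.asarray(rho_measured, dtype=float)))

def apply_correction(rho_measured, shift):
    """Remove the static shift from a measured apparent resistivity."""
    return np.asarray(rho_measured, dtype=float) * 10.0 ** shift

# Example: a site whose curve is shifted down by half a decade
rho_true, rho_meas = 100.0, 100.0 * 10 ** -0.5
s = static_shift(rho_meas, rho_true)   # ~ +0.5
print(apply_correction(rho_meas, s))   # ~ 100.0 ohm.m recovered
```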


Paleobiology, 2016, Vol 43 (1), pp. 68-84
Author(s): Bradley Deline, William I. Ausich

Abstract A priori choices in the detail and breadth of a study are important in addressing scientific hypotheses. In particular, choices in the number and type of characters can greatly influence the results of studies of morphological diversity. A new character suite was constructed to examine trends in the disparity of early Paleozoic crinoids. Character-based rarefaction analysis indicated that a small subset of these characters (~20% of the complete data set) could capture most of the properties of the entire data set in analyses of crinoids as a whole, of noncamerate crinoids and, to a lesser extent, of camerate crinoids. This pattern may result from covariance between characters and from the characterization of rare morphologies that are not represented in the primary axes of morphospace. Shifting emphasis between body regions (oral system, calyx, periproct system, and pelma) also influenced estimates of relative disparity between subclasses of crinoids. Given these results, morphological studies should include a pilot analysis to better determine the amount and type of data needed to address specific scientific hypotheses.
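A minimal sketch of character-based rarefaction on a random taxon-by-character matrix: disparity is recomputed on random character subsets of increasing size, tracing how quickly a subset recovers the full-matrix signal. The sum-of-variances disparity metric is one common choice and an assumption here, not necessarily the paper's exact metric:

```python
import numpy as np

def disparity(char_matrix):
    """Sum of per-character variances, a simple disparity metric."""
    return float(np.nanvar(char_matrix, axis=0).sum())

def character_rarefaction(char_matrix, n_reps=500, rng=None):
    """Mean disparity recovered from random character subsets,
    as a function of subset size."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_chars = char_matrix.shape[1]
    curve = {}
    for k in range(1, n_chars + 1):
        vals = [disparity(char_matrix[:, rng.choice(n_chars, k,
                                                    replace=False)])
                for _ in range(n_reps)]
        curve[k] = float(np.mean(vals))
    return curve

# Example: 30 taxa scored for 25 binary characters
m = np.random.default_rng(1).integers(0, 2, size=(30, 25)).astype(float)
curve = character_rarefaction(m, n_reps=100)
print(curve[5], curve[25])  # disparity at 20% vs 100% of characters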


2013, Vol 2013, pp. 1-9
Author(s): Sandeep Kumar Dhanda, Sudheer Gupta, Pooja Vir, G. P. S. Raghava

The secretion of interleukin-4 (IL4) is characteristic of T-helper 2 (Th2) responses. IL4 is a cytokine produced by CD4+ T cells in response to helminths and other extracellular parasites. It has a critical role in guiding antibody class switching, hematopoiesis and inflammation, and the development of appropriate effector T-cell responses. In this study, an attempt has been made for the first time to determine whether IL4-inducing peptides can be predicted. The data set used in this study comprises 904 experimentally validated IL4-inducing and 742 noninducing MHC class II binders. Our analysis revealed that certain types of residues are preferred at certain positions in IL4-inducing peptides. It was also observed that IL4-inducing and noninducing epitopes differ in composition and motif patterns. Based on our analysis, we developed classification models, of which the hybrid method combining amino acid pair and motif information performed best, with a maximum accuracy of 75.76% and a Matthews correlation coefficient (MCC) of 0.51. These results indicate that it is possible to predict IL4-inducing peptides with reasonable precision. Such models would be useful in designing peptides that may induce the desired Th2 response.
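A sketch of the amino-acid-pair (dipeptide composition) features that underlie methods like the hybrid model described above. The commented classifier is a hypothetical scikit-learn pipeline, not necessarily the one used in the paper, and peptides are assumed to use the 20 standard one-letter codes:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]

def dipeptide_composition(peptide):
    """400-dimensional amino-acid-pair composition: the normalised
    frequency of each ordered residue pair along the peptide."""
    counts = {p: 0 for p in PAIRS}
    for i in range(len(peptide) - 1):
        counts[peptide[i:i + 2]] += 1
    total = max(len(peptide) - 1, 1)
    return [counts[p] / total for p in PAIRS]

print(sum(dipeptide_composition("ACDEFGHIK")))  # frequencies sum to 1.0

# Hypothetical classifier over such features:
# from sklearn.svm import SVC
# X = [dipeptide_composition(p) for p in peptides]  # peptides: list of str
# clf = SVC(kernel="rbf").fit(X, labels)            # labels: 1 = IL4 inducing
```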

