A Roadmap towards Precision Periodontics

Medicina ◽  
2021 ◽  
Vol 57 (3) ◽  
pp. 233
Author(s):  
Mia Rakic ◽  
Natasa Pejcic ◽  
Neda Perunovic ◽  
Danilo Vojvodic

Periodontitis is among the most common health conditions and represents a major public health issue, owing to its increasing prevalence and serious socioeconomic impact. Periodontitis-associated low-grade systemic inflammation and its pathological interplay with systemic conditions further underscore the need for high-performing strategies for the prevention and management of periodontitis. Periodontal diagnosis is the backbone of a successful periodontal strategy, since prevention and treatment plans depend on the accuracy and precision of the respective diagnostics. Periodontal diagnostics is still founded on clinical and radiological parameters that provide limited therapeutic guidance due to the multifactorial complexity of periodontal pathology, which is why biomarkers have been introduced for the first time in the new classification of periodontal and peri-implant conditions, as a first step towards precision periodontics. Since biomarkers and machine learning algorithms are the driving forces of precision medicine, and no periodontal markers have yet been validated for diagnostic use, the implementation of a precision medicine approach in periodontology remains at a very early stage. This narrative review elaborates on the unmet needs in periodontal diagnostics, the concept of precision periodontics, periodontal biomarkers, and a roadmap towards the implementation of a precision medicine approach in periodontal practice.

2018 ◽  
Vol 15 (3) ◽  
pp. 286-293
Author(s):  
Jonathan C Hibbard ◽  
Jonathan S Friedstat ◽  
Sonia M Thomas ◽  
Renee E Edkins ◽  
C Scott Hultman ◽  
...  

Background/aims: Laser treatment of burn scars is considered by some providers to be standard of care. However, there is little evidence-based research on its true benefit. A number of factors hinder evaluation of the benefit of laser treatment, including significant heterogeneity in patient response and possible delayed effects from the laser treatment. Moreover, laser treatments are often provided sequentially using different types of equipment and settings, so there is effectively a large number of overall treatment options to compare. We propose a trial capable of coping with these issues that also attempts to exploit the heterogeneous response in order to estimate optimal treatment plans personalized to each individual patient. It will be the first large-scale randomized trial to compare the effectiveness of laser treatments for burn scars and, to our knowledge, the very first example of the utility of a Sequential Multiple Assignment Randomized Trial (SMART) in plastic surgery. Methods: We propose using a SMART design to investigate the effect of various permutations of laser treatment on hypertrophic burn scars. We will compare and test hypotheses regarding laser treatment effects at a general population level. Simultaneously, we hope to use the data generated to discover possible beneficial personalized treatment plans, tailored to individual patient characteristics. Results: We show that the proposed trial has good power to detect a laser treatment effect at the overall population level, despite comparing a large number of treatment combinations. The trial will simultaneously provide high-quality data appropriate for estimating precision-medicine treatment rules. We detail population-level comparisons of interest and corresponding sample size calculations. We provide simulations suggesting the power of the trial to detect laser effects and the possible benefits of personalizing laser treatment to individual characteristics. Conclusion: We propose, to our knowledge, the first use of a SMART in surgery. The trial is rigorously designed so that it is reasonably straightforward to implement, and it is powered to answer general overall questions of interest. The trial is also designed to provide data suitable for estimating beneficial precision-medicine treatment rules that depend both on individual patient characteristics and on ongoing, real-time patient response to treatment.
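As a rough illustration of the kind of Monte Carlo power calculation such a design relies on, the sketch below simulates only the first-stage comparison between two laser arms on a continuous scar score. The effect size, variance, and sample size are invented placeholders, not the trial's actual design parameters, and the sequential re-randomization stages are omitted.

```python
# Hypothetical power simulation for the stage-1 comparison of a two-stage
# SMART: patients randomized to laser A or laser B, continuous scar score
# (lower = better). All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n_per_arm=100, effect=0.4, sd=1.0):
    a = rng.normal(0.0, sd, n_per_arm)       # arm A outcomes
    b = rng.normal(-effect, sd, n_per_arm)   # arm B improves scores by `effect`
    # Two-sample z-test on the stage-1 comparison.
    se = np.sqrt(a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm)
    z = (a.mean() - b.mean()) / se
    return abs(z) > 1.96                     # reject at alpha = 0.05?

power = np.mean([simulate_trial() for _ in range(2000)])
print(f"Estimated power: {power:.2f}")
```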


Author(s):  
Quanxue Li ◽  
Wentao Dai ◽  
Jixiang Liu ◽  
Yi-Xue Li ◽  
Yuan-Yuan Li

Abstract The implementation of cancer precision medicine requires biomarkers or signatures for predicting prognosis and therapeutic benefit. Most current efforts in this field pay far more attention to predictive accuracy than to molecular mechanistic interpretability. A mechanism-driven strategy has recently emerged, aiming to build signatures with both predictive power and explanatory power. Driven by this strategy, we developed a robust gene dysregulation analysis framework with machine learning algorithms, which is capable of exploring gene dysregulations underlying carcinogenesis from high-dimensional data, taking into consideration cooperativity and synergy between regulators as well as several other transcriptional regulation rules. We then applied the framework to a colorectal cancer (CRC) cohort from TCGA. The identified CRC-related dysregulations significantly covered known carcinogenic processes and exhibited a good prognostic effect. By choosing dysregulations with a greedy strategy, we built a four-dysregulation signature (4-DysReg), which is capable of predicting prognosis and adjuvant chemotherapy benefit. 4-DysReg has the potential to explain carcinogenesis in terms of dysfunctional transcriptional regulation. These results demonstrate that our gene dysregulation analysis framework can be used to develop predictive signatures with mechanistic interpretability for cancer precision medicine and, furthermore, to elucidate the mechanisms of carcinogenesis.
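A minimal sketch of the greedy selection idea, under the assumption that each candidate dysregulation has already been scored per sample: forward selection adds whichever feature most improves cross-validated AUC. The synthetic data and the logistic-regression scorer are stand-ins, not the paper's actual framework.

```python
# Greedy forward selection of a small signature from candidate features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)  # stand-in for dysregulation scores

selected, remaining = [], list(range(X.shape[1]))
for _ in range(4):                           # build a four-feature signature
    best_auc, best_j = -np.inf, None
    for j in remaining:
        auc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, selected + [j]], y,
                              cv=5, scoring="roc_auc").mean()
        if auc > best_auc:
            best_auc, best_j = auc, j
    selected.append(best_j)
    remaining.remove(best_j)
    print(f"added feature {best_j}, CV AUC = {best_auc:.3f}")
```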


2019 ◽  
Vol 5 (3) ◽  
pp. 205630511986765
Author(s):  
Supraja Gurajala ◽  
Suresh Dhaniyala ◽  
Jeanna N. Matthews

Poor air quality is recognized as a major risk factor for human health globally. Critical to addressing this important public-health issue is the effective dissemination of air quality data, information about adverse health effects, and the necessary mitigation measures. However, recent studies have shown that even when the public has access to air quality data and understands their importance, people do not necessarily take action to protect their health or exhibit pro-environmental behaviors to address the problem. Most existing studies on public attitudes and responses to air quality are offline studies, with a limited number of survey participants covering a limited number of geographical locations. For a larger survey size and a wider set of locations, we collected Twitter data for a period of nearly 2 years and analyzed these data for three major cities: Paris, London, and New Delhi. We identify the three hashtags in each city whose tweet frequency best correlates with local air quality. Using tweets with these hashtags, we determined that people’s response to air quality across all three cities was nearly identical when considering relative changes in air pollution. Using machine-learning algorithms, we determined that health concerns dominated public response when air quality degraded, with the strongest increase in concern in New Delhi, where pollution levels are the highest among the three cities studied. The public call for political solutions when air quality worsens is consistent with similar findings from offline surveys in other cities. We also conducted an unsupervised learning analysis to extract topics from tweets in Delhi and studied their evolution over time and with changing air quality. Our analysis helped extract relevant words or features associated with different air quality–related topics such as air pollution policy and health. The topic modeling analysis also revealed niche topics associated with sporadic air quality events, such as fireworks during festivals and the air quality impact of an outdoor sporting event. Our approach shows that tweet-based analysis can enable social scientists to probe and survey public response to events such as air quality episodes in a timely fashion and help policy makers respond appropriately.
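A minimal sketch of the kind of unsupervised topic extraction described above, using latent Dirichlet allocation from scikit-learn. The example tweets are invented placeholders; the paper's actual corpus, preprocessing, and model settings are not reproduced here.

```python
# Topic modeling on a toy tweet corpus with LDA.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "delhi air quality severe today wear a mask",
    "smog after diwali fireworks is unbearable",
    "government must act on air pollution policy now",
    "odd even scheme back as pollution spikes",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)                # document-term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]  # top words per topic
    print(f"topic {k}: {', '.join(top)}")
```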


2019 ◽  
Vol 24 (34) ◽  
pp. 3998-4006
Author(s):  
Shijie Fan ◽  
Yu Chen ◽  
Cheng Luo ◽  
Fanwang Meng

Background: Riding the tide of big data, machine learning is coming into its own. Given the huge amounts of epigenetic data generated by biological experiments and in the clinic, machine learning can help detect epigenetic features in the genome, find correlations between phenotypes and modifications in histones or genes, accelerate the screening of lead compounds targeting epigenetic diseases, and support many other aspects of epigenetics research, thereby helping realize the promise of precision medicine. Methods: In this minireview, we focus on the fundamentals and applications of the machine learning methods regularly used in the epigenetics field and explain their features. Their advantages and disadvantages are also discussed. Results: Machine learning algorithms have accelerated studies in precision medicine targeting epigenetic diseases. Conclusion: To make full use of machine learning algorithms, one should become familiar with their pros and cons, so as to benefit from big data by choosing the most suitable method(s).


2012 ◽  
Vol 63 (1) ◽  
pp. 13-32 ◽  
Author(s):  
Roberta Prokešová ◽  
Dušan Plašienka ◽  
Rastislav Milovský

Structural pattern and emplacement mechanisms of the Krížna cover nappe (Central Western Carpathians)

The Central Western Carpathians are characterized by both thick- and thin-skinned thrust tectonics that originated during the Cretaceous. The Krížna Unit (Fatric Superunit), with a thickness of only a few km, is the most widespread cover nappe system; it completely overthrusts the Tatric basement/cover superunit over an area of about 12 thousand square km. In searching for a reliable model of its origin and emplacement, we have collected structural data throughout the nappe body, from its hinterland backstop (Veporic Superunit) to its frontal parts. Fluid inclusion (FI) data from carbonate cataclastic rocks occurring at the nappe sole provided useful information about the p-T conditions during nappe transport. The crucial phenomena considered in formulating our evolutionary model are: (1) the nappe was derived from a broad rifted basinal area bounded by elevated domains; (2) the nappe body is composed of alternating, rheologically very variable sedimentary rock complexes, creating a mechanically stratified multilayer; (3) the presence of soft strata serving as décollement horizons; (4) stress and strain gradients increasing towards the backstop; (5) progressive internal deformation at very low-grade conditions, partitioned into several deformation stages reflecting varying external constraints on nappe movement; (6) a very weak nappe sole formed by cataclasites, indicating fluid-assisted nappe transport during all stages; (7) injection of hot overpressured fluids from external sources (deformed basement units), facilitating frontal ramp overthrusting under supralithostatic conditions. It was found that no simple mechanical model can be applied; rather, all known principal emplacement mechanisms and driving forces temporarily participated in the progressive structural evolution of the nappe. Rear compression operated during the early stages, when the sedimentary succession was detached, shortened and transported over the frontal ramp. Subsequently, gravity spreading and gliding governed the final nappe emplacement over the unconstrained basinal foreland.


2011 ◽  
Vol 2011 ◽  
pp. 1-31 ◽  
Author(s):  
Maira Ladeia R. Curti ◽  
Patrícia Jacob ◽  
Maria Carolina Borges ◽  
Marcelo Macedo Rogero ◽  
Sandra Roberta G. Ferreira

Obesity is currently considered a serious public health issue due to its strong impact on health, the economy, and quality of life. It is considered a chronic low-grade inflammation state and is directly involved in the genesis of metabolic disturbances, such as insulin resistance and dyslipidemia, which are well-known risk factors for cardiovascular disease. Furthermore, there is evidence that genetic variants that predispose to inflammation and metabolic disturbances could interact with environmental factors, such as diet, modulating individual susceptibility to developing these conditions. This paper aims to review the possible interactions between diet and single-nucleotide polymorphisms (SNPs) in genes implicated in the inflammatory response, lipoprotein metabolism, and oxidative status. Accordingly, the impact of genetic variants of peroxisome proliferator-activated receptor (PPAR)-gamma, tumor necrosis factor (TNF)-alpha, interleukin (IL)-1, IL-6, apolipoprotein (Apo) A1, Apo A2, Apo A5, Apo E, glutathione peroxidases 1, 2, and 4, and selenoprotein P under variations in diet composition is described.


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Bethany M. Barnes ◽  
Louisa Nelson ◽  
Anthony Tighe ◽  
George J. Burghel ◽  
I-Hsuan Lin ◽  
...  

Abstract Background Epithelial ovarian cancer (OC) is a heterogeneous disease consisting of five major histologically distinct subtypes: high-grade serous (HGSOC), low-grade serous (LGSOC), endometrioid (ENOC), clear cell (CCOC) and mucinous (MOC). Although HGSOC is the most prevalent subtype, representing 70–80% of cases, a 2013 landmark study by Domcke et al. found that the most frequently used OC cell lines are not molecularly representative of this subtype. This raises the question: if not HGSOC, from which subtype do these cell lines derive? Indeed, non-HGSOC subtypes often respond poorly to chemotherapy; therefore, representative models are imperative for developing new targeted therapeutics. Methods Non-negative matrix factorisation (NMF) was applied to transcriptomic data from 44 OC cell lines in the Cancer Cell Line Encyclopedia, assessing the quality of clustering into 2–10 groups. Epithelial OC subtypes were assigned to cell lines optimally clustered into five transcriptionally distinct classes, confirmed by integration with subtype-specific mutations. A transcriptional subtype classifier was then developed by trialling three machine learning algorithms using subtype-specific metagenes defined by NMF. The ability of the classifiers to predict subtype was tested using RNA sequencing of a living biobank of patient-derived OC models. Results Application of NMF optimally clustered the 44 cell lines into five transcriptionally distinct groups. Close inspection of orthogonal datasets revealed that this five-cluster delineation corresponds to the five major OC subtypes. This NMF-based classification validates the Domcke et al. analysis in identifying lines most representative of HGSOC and additionally identifies models representing the four other subtypes. However, NMF of the cell lines into two clusters did not align with the dualistic model of OC, suggesting this classification is an oversimplification. Subtype designation of patient-derived models by a random forest transcriptional classifier aligned with the prior diagnosis in 76% of unambiguous cases. Where there was disagreement, this often indicated a potential alternative diagnosis, supported by a review of histological, molecular and clinical features. Conclusions This robust classification informs the selection of the most appropriate models for all five histotypes. Following further refinement on larger training cohorts, the transcriptional classification may represent a useful tool to support the classification of new model systems of OC subtypes.
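A minimal sketch of the two-step pipeline described above: NMF decomposition of expression profiles into metagenes, cluster assignment, and a random-forest classifier trained on the metagene weights. The synthetic matrix stands in for the CCLE expression data, and the component count and hyperparameters are illustrative assumptions.

```python
# NMF clustering of expression profiles, then a random-forest classifier.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((44, 500))                   # 44 cell lines x 500 genes (non-negative)

nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)                    # sample weights over 5 metagenes
clusters = W.argmax(axis=1)                 # assign each line its dominant metagene

# Train a classifier on the metagene weights to label new model systems.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(W, clusters)
new_profile = rng.random((1, 500))          # e.g. a patient-derived model
print("predicted subtype cluster:", clf.predict(nmf.transform(new_profile)))
```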


2019 ◽  
Author(s):  
Roberto Boto ◽  
Francesca Peccati ◽  
Rubén Laplaza ◽  
Chaoyu Quan ◽  
Alessandra Carbone ◽  
...  

The quantification of noncovalent interactions in big systems is of crucial importance for understanding the structure and function of biosystems. The NCI method [J. Am. Chem. Soc. 132, 6498 (2010)] makes it possible to identify attractive and repulsive noncovalent interactions from promolecular densities in a fast manner. However, the approach has until now remained visual and qualitative; the relationship with energetics was conspicuously missing. We present a new version of NCIPLOT which allows the properties of NonCovalent Interaction (NCI) regions to be quantified in a fast manner. To do so, the definition of NCI volumes is introduced, which allows quantification of intra- and intermolecular NCI properties in big systems where wavefunctions are not available. The connection between these integrals and energetics is reviewed for benchmark systems (S66x8), showing that our simple approach can lead to GGA-quality energies while scaling with the number of atoms involved in the interaction (not the total number of atoms). The new implementation also includes an adaptive grid, which allows the computation to run in a fast, parallelizable and efficient computational environment. The relationship with energetics derived from force fields is highlighted, and the faster algorithm is exploited to analyze the evolution of interactions along MD trajectories. Through machine learning algorithms we characterize the relevance of NCI integrals in understanding the energetics of big systems, which is then applied to revealing the energetic changes along conformational changes, as well as identifying the atoms involved. This simple approach makes it possible to identify the driving forces in biomolecular structural changes at both the spatial and energetic levels, while going beyond a mere parametrized-distances analysis.
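For orientation, the quantity at the heart of the NCI method is the reduced density gradient, s = |∇ρ| / (2 (3π²)^(1/3) ρ^(4/3)), which dips toward zero where noncovalent interactions occur. The sketch below evaluates it on a promolecular density built from single-exponential atomic pieces; the decay constant is a crude stand-in for NCIPLOT's fitted atomic densities, and no integration over NCI volumes is attempted.

```python
# Reduced density gradient on a toy promolecular density (atomic units).
import numpy as np

atoms = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.8]])  # two atoms

def promolecular_density(points, zeta=2.0):
    # rho(r) = sum_i exp(-zeta * |r - R_i|)
    d = np.linalg.norm(points[:, None, :] - atoms[None, :, :], axis=2)
    return np.exp(-zeta * d).sum(axis=1)

def reduced_density_gradient(points, h=1e-4):
    rho = promolecular_density(points)
    grad = np.zeros_like(points)
    for k in range(3):                       # central finite differences
        dp = np.zeros(3)
        dp[k] = h
        grad[:, k] = (promolecular_density(points + dp)
                      - promolecular_density(points - dp)) / (2 * h)
    c = 2 * (3 * np.pi**2) ** (1 / 3)
    return np.linalg.norm(grad, axis=1) / (c * rho ** (4 / 3))

# s drops toward zero in the internuclear region, flagging an interaction.
midpoint = np.array([[0.0, 0.0, 0.9]])
print("s at bond midpoint:", reduced_density_gradient(midpoint))
```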


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Li Zhang ◽  
Xia Zhe ◽  
Min Tang ◽  
Jing Zhang ◽  
Jialiang Ren ◽  
...  

Purpose. This study aimed to investigate the value of biparametric magnetic resonance imaging (bp-MRI)-based radiomics signatures for the preoperative prediction of prostate cancer (PCa) grade, compared with visual assessments by radiologists based on Prostate Imaging Reporting and Data System Version 2.1 (PI-RADS V2.1) scores of multiparametric MRI (mp-MRI). Methods. This retrospective study included 142 consecutive patients with histologically confirmed PCa who underwent mp-MRI before surgery. MRI images were scored and evaluated by two independent radiologists using PI-RADS V2.1. The radiomics workflow was divided into five steps: (a) image selection and segmentation, (b) feature extraction, (c) feature selection, (d) model establishment, and (e) model evaluation. Three machine learning models (random forest (RF), logistic regression, and support vector machine (SVM)) were constructed to differentiate high-grade from low-grade PCa. Receiver operating characteristic (ROC) analysis was used to compare the machine learning-based bp-MRI radiomics models with PI-RADS V2.1. Results. In all, 8 stable radiomics features out of 804 extracted features based on T2-weighted imaging (T2WI) and ADC sequences were selected. Radiomics signatures successfully categorized high-grade and low-grade PCa cases (P < 0.05) in both the training and test datasets. The radiomics models based on RF (area under the curve, AUC: 0.982; 0.918), logistic regression (AUC: 0.886; 0.886), and SVM (AUC: 0.943; 0.913) had better diagnostic performance than PI-RADS V2.1 (AUC: 0.767; 0.813) in the training and test cohorts, respectively, when predicting PCa grade. Conclusions. The results of this clinical study indicate that machine learning-based analysis of bp-MRI radiomics models may help distinguish high-grade from low-grade PCa and outperformed PI-RADS V2.1 scores based on mp-MRI, with the RF model performing slightly better.
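A minimal sketch of steps (d) and (e) above: fitting the three classifier types and comparing test-set ROC AUC. The synthetic features stand in for the 8 selected radiomics features, labels mark high- versus low-grade PCa, and the hyperparameters are illustrative assumptions rather than the study's settings.

```python
# Fit RF, logistic regression, and SVM; compare test-set ROC AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=142, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "LogReg": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```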


2020 ◽  
Author(s):  
Peer Nowack ◽  
Lev Konstantinovskiy ◽  
Hannah Gardiner ◽  
John Cant

Abstract. Air pollution is a key public health issue in urban areas worldwide. The development of low-cost air pollution sensors is consequently a major research priority. However, low-cost sensors often fail to attain sufficient measurement performance compared with state-of-the-art measurement stations, and they typically require calibration procedures in expensive laboratory settings. As a result, there has been much debate about calibration techniques that could make their performance more reliable while avoiding the need for advanced laboratories. One repeatedly proposed strategy is low-cost sensor calibration through co-location with public measurement stations. The idea is that, using a regression function, the low-cost sensor signals can be calibrated against the station reference signal, and the sensors can then be deployed separately with performance similar to the original stations. Here we test the idea of using machine learning algorithms for such regression tasks, using hourly-averaged co-location data for nitrogen dioxide (NO2) and particulate matter of particle sizes smaller than 10 μm (PM10) at three different locations in the urban area of London, UK. Specifically, we compare the performance of Ridge regression, a linear statistical learning algorithm, to two non-linear algorithms, Random Forest (RF) regression and Gaussian Process regression (GPR). We further benchmark the performance of all three machine learning methods against the more common Multiple Linear Regression (MLR). We obtain very good out-of-sample R2 scores (coefficient of determination) > 0.7, frequently exceeding 0.8, for the machine-learning-calibrated low-cost sensors. In contrast, the performance of MLR is more dependent on random variations in the sensor hardware and co-located signals, and it is also more sensitive to the length of the co-location period. We find that, subject to certain conditions, GPR is typically the best-performing method in our calibration setting, followed by Ridge regression and RF regression. However, we also highlight several key limitations of the machine learning methods, which will be crucial to consider in any co-location calibration. In particular, none of the methods is able to extrapolate to pollution levels well outside those encountered at the training stage. Ultimately, this is one of the key limiting factors when sensors are deployed away from the co-location site itself. Consequently, we find that the linear Ridge method, which best mitigates such extrapolation effects, typically performs as well as, or even better than, GPR after sensor relocation. Overall, our results highlight the potential of co-location methods paired with machine learning calibration techniques to reduce the costs of air pollution measurements, subject to careful consideration of the co-location training conditions, the choice of calibration variables, and the features of the calibration algorithm.
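A minimal sketch of the co-location calibration idea: regress a reference station signal on low-cost sensor channels and compare Ridge, random forest and Gaussian process regressors by out-of-sample R². The synthetic data stand in for hourly co-location measurements; the kernel and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Compare Ridge, RF, and GPR for co-location calibration of a toy sensor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))              # raw sensor channels + T, RH, etc.
y = 2 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.3, 1000)  # reference signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Ridge": Ridge(alpha=1.0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "GPR": GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: out-of-sample R^2 = {model.score(X_te, y_te):.3f}")
```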

