Sarcopenia: Beyond Muscle Atrophy and into the New Frontiers of Opportunistic Imaging, Precision Medicine, and Machine Learning

2018 ◽  
Vol 22 (03) ◽  
pp. 307-322 ◽  
Author(s):  
Leon Lenchik ◽  
Robert Boutin

Abstract
As populations continue to age worldwide, the impact of sarcopenia on public health will continue to grow. The clinically relevant and increasingly common diagnosis of sarcopenia is at the confluence of three tectonic shifts in medicine: opportunistic imaging, precision medicine, and machine learning. This review focuses on the state-of-the-art imaging of sarcopenia and provides context for such imaging by discussing the epidemiology, pathophysiology, consequences, and future directions of the field.

2019 ◽  
Vol 53 ◽  
pp. 6 ◽  
Author(s):  
Kelly Polido Kaneshiro Olympio ◽  
Fernanda Junqueira Salles ◽  
Ana Paula Sacone da Silva Ferreira ◽  
Elizeu Chiodi Pereira ◽  
Allan Santos de Oliveira ◽  
...  

Given the innovative nature of the human exposome approach, we present the state of the art of exposome studies and discuss current challenges and perspectives in this area. The Expossoma e Saúde do Trabalhador group (eXsat – Exposome and Worker's Health) conducted several reading and discussion activities, systematizing the literature in the area published between January 2005 and January 2017 and available in the PubMed and Web of Science databases. This commentary presents a thematic analysis to encourage the dissemination of the exposome approach in Public Health studies.


Author(s):  
Ravinder Kumar

This article presents a critical review of extensive research on automatic fingerprint matching over the past decade. In particular, the focus is on non-minutiae-based features and machine-learning-based fingerprint matching approaches. The article highlights the problems pertaining to minutiae-based features and presents a detailed review of the state of the art in non-minutiae-based features. It also gives an overview of state-of-the-art fingerprint benchmark databases, along with open problems and future directions for fingerprint matching.


2018 ◽  
Vol 19 (1) ◽  
pp. 10-33 ◽  
Author(s):  
Marco Bisogno ◽  
John Dumay ◽  
Francesca Manes Rossi ◽  
Paolo Tartaglia Polcini

Purpose – A literature review is an appropriate way to open any special issue, introducing the state-of-the-art topics and linking past research with the papers appearing in this special issue on IC in education. The paper aims to discuss this issue.
Design/methodology/approach – This research uses a structured literature review to investigate the state of the art and future directions of the IC literature in education. In total, 47 articles are explored, including nine from this special issue.
Findings – IC in education research is concentrated in Europe and mainly addresses IC in universities. Current IC research is progressing by examining IC practices inside universities using a third-stage IC approach, with new research also concentrating on third-mission outcomes; thus, there is scope to continue IC and education research beyond universities. IC in education can also expand into fifth-stage IC research, which abandons the boundaries of the educational institution and concentrates on the impact of IC and education on multiple stakeholders.
Research limitations/implications – Current IC in education research is too narrow, mainly investigating IC in European contexts using case study methodology. However, there is ample scope to widen research that develops new frameworks in different educational and country contexts using a wider range of research methodologies. IC in education needs to expand its boundaries so that it does not lose its relevance and can contribute to wider policy debates.
Originality/value – This paper presents a structured literature review of the articles investigating IC in education.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have shown biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. Many future directions and solutions remain for mitigating the problem of bias in AI systems. We hope that this survey motivates researchers to tackle these issues in the near future by building on existing work in their respective fields.
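To illustrate the kind of group-fairness definition such a taxonomy catalogs, here is a minimal sketch of demographic parity difference, the gap in positive-prediction rates between two groups. The function name, group labels, and data are hypothetical, not taken from the survey.

```python
# Minimal sketch: demographic parity difference between two groups.
# Predictions and group labels below are hypothetical toy data.
def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# group "a" positive rate = 3/4, group "b" = 1/4, so the gap is 0.5
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate that both groups receive positive predictions at the same rate; real fairness toolkits compute many such metrics side by side, since the survey notes that different definitions can conflict.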


2021 ◽  
Author(s):  
Kai Guo ◽  
Zhenze Yang ◽  
Chi-Hua Yu ◽  
Markus J. Buehler

This review surveys the state of the art in research on the design of mechanical materials using machine learning.


2020 ◽  
Author(s):  
Fei Qi ◽  
Zhaohui Xia ◽  
Gaoyang Tang ◽  
Hang Yang ◽  
Yu Song ◽  
...  

As an emerging field, Automated Machine Learning (AutoML) aims to reduce or eliminate manual operations that require machine-learning expertise. In this paper, a graph-based architecture is employed to represent flexible combinations of ML models, which provides a larger search space than tree-based and stacking-based architectures. On this basis, an evolutionary algorithm is proposed to search for the best architecture, with the mutation and heredity operators as the key drivers of architecture evolution. Combined with Bayesian hyper-parameter optimization, the proposed approach can automate the machine-learning workflow. On the PMLB dataset, the proposed approach achieves state-of-the-art performance compared with TPOT, Autostacker, and auto-sklearn. Some of the optimized models have complex structures that would be difficult to obtain through manual design.
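The evolutionary loop described above can be sketched in miniature: candidate pipelines are evolved with mutation and heredity (crossover) operators, and the fittest survive each generation. The stage names and the stand-in fitness function here are hypothetical toy values, not the paper's graph representation or its cross-validated scores.

```python
import random

# Toy sketch of an evolutionary pipeline search: mutation perturbs one stage,
# heredity (crossover) mixes two parents. Stages and scores are hypothetical.
STAGES = {
    "scale":  ["none", "standard", "minmax"],
    "reduce": ["none", "pca"],
    "model":  ["tree", "linear", "knn"],
}

def random_pipeline():
    return {stage: random.choice(opts) for stage, opts in STAGES.items()}

def mutate(p):
    child = dict(p)
    stage = random.choice(list(STAGES))     # pick one stage to perturb
    child[stage] = random.choice(STAGES[stage])
    return child

def crossover(a, b):
    # heredity: each stage is inherited from one of the two parents
    return {s: random.choice([a[s], b[s]]) for s in STAGES}

def fitness(p):
    # stand-in score; a real AutoML system would cross-validate the pipeline
    score = {"tree": 0.7, "linear": 0.6, "knn": 0.65}[p["model"]]
    return score + (0.05 if p["scale"] != "none" else 0.0)

def evolve(generations=20, pop_size=10):
    pop = [random_pipeline() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # elitist selection
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size // 2 - 1)]
        children.append(crossover(*random.sample(parents, 2)))
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

Because the best parents are carried over unchanged, the top fitness in the population never decreases across generations, which is the same elitism property most evolutionary AutoML searches rely on.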


2021 ◽  
Vol 56 (4) ◽  
pp. 105-118
Author(s):  
Žilvinas Švedkauskas ◽  
Ahmed Maati

An emerging literature has shown concerns about the impact of the pandemic on the proliferation of digital surveillance. Contributing to these debates, in this paper we demonstrate how the pandemic facilitates digital surveillance in three ways: (1) By shifting everyday communication to digital means it contributes to the generation of extensive amounts of data susceptible to surveillance. (2) It motivates the development of new digital surveillance tools. (3) The pandemic serves as a perfect justification for governments to prolong digital surveillance. We provide empirical anecdotes for these three effects by examining reports by the Global Digital Policy Incubator at Stanford University. Building on our argument, we conclude that we might be on the verge of a dangerous normalization of digital surveillance. Thus, we call on scholars to consider the full effects of public health crises on politics and suggest scrutinizing sources of digital data and the complex relationships between the state, corporate actors, and the sub-contractors behind digital surveillance.


2021 ◽  
Vol 11 (17) ◽  
pp. 8074
Author(s):  
Tierui Zou ◽  
Nader Aljohani ◽  
Keerthiraj Nagaraj ◽  
Sheng Zou ◽  
Cody Ruben ◽  
...  

In power systems, real-time monitoring of cyber-physical security makes false data injection attacks on wide-area measurements a major concern. However, the database of network parameters is just as crucial to the state estimation process: maintaining the accuracy of the system model is the other part of the equation, since almost all applications in power systems depend heavily on the state estimator outputs. While much effort has been devoted to false data injection on measurements, little work has been reported on the broader theme of false data injection on the network parameter database. State-of-the-art physics-based model solutions correct false data injection on the network parameter database considering only the available wide-area measurements, and they use deterministic models for correction. In this paper, an overdetermined physics-based model for correcting parameter false data injection is presented. The overdetermined model uses a parameter database correction Jacobian matrix and a Taylor series expansion approximation. The method further applies the concept of synthetic measurements, i.e., measurements that do not exist in the real-life system. A machine-learning linear-regression model for measurement prediction is integrated into the framework by deriving weights for the creation of synthetic measurements. The presented model is validated on the IEEE 118-bus system. Numerical results show that the approximation error is lower than the state of the art, while providing robustness to the correction process. Because the model is easy to implement on top of the classical weighted-least-squares solution, it has strong potential for real-life implementation.
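The correction framework builds on the classical weighted-least-squares estimator, which solves the normal equations (Hᵀ W H) x = Hᵀ W z for the state x given measurements z, a measurement Jacobian H, and weights W (inverse measurement variances). A minimal linear sketch follows; the matrices are toy values, not the paper's IEEE 118-bus model.

```python
import numpy as np

# Minimal linear weighted-least-squares sketch of the estimator the paper
# builds on: solve (H^T W H) x = H^T W z. Overdetermined: 3 measurements,
# 2 states. H, W, and z are toy values, not the IEEE 118-bus model.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # measurement Jacobian
z = np.array([1.0, 2.0, 3.1])       # measurements, slightly inconsistent
W = np.diag([1.0, 1.0, 0.5])        # weights = inverse measurement variances

x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # WLS state estimate
residuals = z - H @ x                            # measurement residuals
print(x, residuals)
```

In the paper's setting the synthetic measurements enter this same machinery as extra rows of H and z, with their weights supplied by the learned regression model; the residuals are what flag inconsistencies between measurements and the (possibly corrupted) parameter database.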

