The Role of Computational Tools in Biomechanics

Author(s):  
D.C. Simkins ◽  
J.B. Alford
Toxics ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 41 ◽  
Author(s):  
Jingchuan Xue ◽  
Yunjia Lai ◽  
Chih-Wei Liu ◽  
Hongyu Ru

The proposal of the “exposome” concept represents a shift in the research paradigm for studying exposure-disease relationships, from an isolated and partial view to a systematic and agnostic approach. Nevertheless, exposome implementation faces a variety of challenges, including measurement techniques and data analysis. Here we focus on the chemical exposome, which refers to the mixtures of chemical pollutants people are exposed to from the embryonic stage onwards. We review current chemical exposome measurement approaches, with a focus on those based on mass spectrometry. We further explore strategies for implementing the chemical exposome concept and discuss the available chemical exposome studies. Early progress in chemical exposome research is outlined, and major challenges are highlighted. In conclusion, efforts toward the chemical exposome have only uncovered the tip of the iceberg, and further advances in measurement techniques, computational tools, high-throughput data analysis, and standardization may allow more exciting discoveries concerning the role of the exposome in human health and disease.
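To make the data-analysis challenge concrete, the sketch below illustrates one common step in mass-spectrometry-based exposome workflows: aligning untargeted LC-MS features across samples by m/z and retention time. The column names, tolerances, and greedy matching strategy are illustrative assumptions, not methods taken from the review.

```python
# Minimal sketch: align untargeted LC-MS features across samples by
# m/z and retention-time tolerance (column names and tolerances are
# illustrative assumptions, not taken from the reviewed studies).
import pandas as pd

MZ_TOL = 0.005    # assumed m/z tolerance (Da)
RT_TOL = 0.2      # assumed retention-time tolerance (min)

def align_features(feature_tables: list[pd.DataFrame]) -> pd.DataFrame:
    """Greedy alignment: each feature joins the first group whose
    reference m/z and retention time fall within tolerance."""
    groups = []  # each group: {"mz", "rt", "intensities": {sample: value}}
    for sample_id, table in enumerate(feature_tables):
        for _, row in table.iterrows():
            for g in groups:
                if abs(row["mz"] - g["mz"]) <= MZ_TOL and abs(row["rt"] - g["rt"]) <= RT_TOL:
                    g["intensities"][sample_id] = row["intensity"]
                    break
            else:
                groups.append({"mz": row["mz"], "rt": row["rt"],
                               "intensities": {sample_id: row["intensity"]}})
    return pd.DataFrame(
        [{"mz": g["mz"], "rt": g["rt"], **g["intensities"]} for g in groups]
    )

# Toy example with two samples
s1 = pd.DataFrame({"mz": [201.112, 355.070], "rt": [4.1, 9.8], "intensity": [1.2e5, 3.4e4]})
s2 = pd.DataFrame({"mz": [201.114, 512.300], "rt": [4.2, 12.5], "intensity": [0.9e5, 5.1e4]})
print(align_features([s1, s2]))
```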


2020 ◽  
Vol 2 (1-2) ◽  
pp. 181-191 ◽  
Author(s):  
Giancarlo Guizzardi

According to the FAIR guiding principles, one of the central attributes for maximizing the added value of information artifacts is interoperability. In this paper, I discuss the importance of, and propose a characterization for, the notion of Semantic Interoperability. Moreover, I show that a direct consequence of this view is that Semantic Interoperability cannot be achieved without the support of, on the one hand, (i) ontologies, as meaning contracts capturing the conceptualizations represented in information artifacts, and, on the other hand, (ii) Ontology, as a discipline proposing formal methods and theories for clarifying these conceptualizations and articulating their representations. In particular, I discuss the fundamental role of formal ontological theories (in the latter sense) in properly grounding the construction of representation languages, as well as of methodological and computational tools for supporting the engineering of ontologies (in the former sense), in the context of FAIR.
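As a small, hedged illustration of an ontology acting as a meaning contract between information artifacts, the sketch below encodes a toy conceptualization with rdflib; the classes, properties, and namespace are invented for the example and are not tooling proposed in the paper.

```python
# Minimal sketch of an ontology as a "meaning contract" shared by two
# information artifacts; the classes and the rdflib library choice are
# illustrative assumptions, not the paper's own proposal.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

EX = Namespace("http://example.org/ontology#")
g = Graph()
g.bind("ex", EX)

# Explicit conceptualization: a Patient is a kind of Person,
# and a Diagnosis refers to a Patient.
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Patient, RDF.type, OWL.Class))
g.add((EX.Patient, RDFS.subClassOf, EX.Person))
g.add((EX.Diagnosis, RDF.type, OWL.Class))
g.add((EX.refersTo, RDF.type, OWL.ObjectProperty))
g.add((EX.refersTo, RDFS.domain, EX.Diagnosis))
g.add((EX.refersTo, RDFS.range, EX.Patient))
g.add((EX.Patient, RDFS.comment,
       Literal("Shared meaning: systems exchanging 'Patient' records agree on this definition.")))

print(g.serialize(format="turtle"))
```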


Author(s):  
Xiaofei Lu ◽  
J. Elliott Casal ◽  
Yingying Liu

This paper outlines the research agenda of a framework that integrates corpus- and genre-based approaches to academic writing research and pedagogy. This framework posits two primary goals of academic writing pedagogy, that is, to help novice writers develop knowledge of the rhetorical functions characteristic of academic discourse and become proficient in making appropriate linguistic choices to materialize such functions. To these ends, research in this framework involves 1) compilation of corpora of academic writing annotated for rhetorical functions, 2) analysis of the organization and distribution of such functions, 3) analysis of the linguistic features associated with different functions, 4) development of computational tools to automate functional annotation, 5) use of the annotated corpora in academic writing pedagogy, and 6) exploration of the role of form-function mappings in academic writing assessment. The implications of this framework for promoting consistent attention to form-function mappings in academic writing research, pedagogy, and assessment are discussed.
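As a hedged illustration of step 4, automating functional annotation, the sketch below treats rhetorical-function tagging as sentence classification; the labels, example sentences, and TF-IDF-plus-logistic-regression pipeline are assumptions for the example, not the framework's actual tooling.

```python
# Minimal sketch: rhetorical-function annotation as sentence classification.
# Labels, sentences, and the model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: sentences annotated with rhetorical functions
sentences = [
    "Previous studies have largely ignored this population.",
    "Little is known about the underlying mechanism.",
    "We collected data from 120 participants over six months.",
    "Samples were analysed using standard procedures.",
    "These results suggest a strong association between the variables.",
    "Our findings indicate that the effect is robust.",
]
functions = ["gap", "gap", "method", "method", "claim", "claim"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(sentences, functions)

# Predict the rhetorical function of an unseen sentence
print(model.predict(["Few studies have examined this question directly."]))
```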


2017 ◽  
Vol 20 (4) ◽  
pp. 1181-1192 ◽  
Author(s):  
Lionel Morgado ◽  
Frank Johannes

Small RNAs (sRNAs) are important short-length molecules with regulatory functions essential for plant development and plasticity. High-throughput sequencing of total sRNA populations has revealed that the largest share of sRNA remains uncategorized. To better understand the role of sRNA-mediated cellular regulation, it is necessary to create accurate and comprehensive catalogues of sRNA and their sequence features, a task that currently relies on nontrivial bioinformatic approaches. Although a large number of computational tools have been developed to predict features of sRNA sequences, these tools are mostly dedicated to microRNAs and none integrates the functionalities necessary to describe units from all sRNA pathways thus far discovered in plants. Here, we review the different classes of sRNA found in plants and describe available bioinformatics tools that can help in their detection and categorization.
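A first pass at such categorization often starts from read length, since plant miRNAs are typically 20-22 nt and heterochromatic siRNAs 23-24 nt. The sketch below bins reads from a FASTA file accordingly; the file name and exact cut-offs are illustrative assumptions rather than a published pipeline.

```python
# Minimal sketch: bin plant sRNA sequencing reads by length as a crude
# first-pass categorization. File name and cut-offs are assumptions.
from collections import Counter

def read_fasta(path):
    """Yield sequences from a FASTA file."""
    seq = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                    seq = []
            elif line:
                seq.append(line)
        if seq:
            yield "".join(seq)

def categorize(length: int) -> str:
    if 20 <= length <= 22:
        return "miRNA/siRNA-like (20-22 nt)"
    if 23 <= length <= 24:
        return "het-siRNA-like (23-24 nt)"
    return "other"

counts = Counter(categorize(len(s)) for s in read_fasta("srna_reads.fasta"))
for category, n in counts.most_common():
    print(f"{category}: {n} reads")
```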


1999 ◽  
Vol 26 (1) ◽  
pp. 217-260
Author(s):  
FERNAND GOBET

In his review, Rispoli's main concern is that Elman et al.'s book will aggravate the degree of polarization in developmental psycholinguistics. I cannot really comment on this worry, as developmental psycholinguistics is not my field. Instead, I will discuss some questions more related to my background – the role of computational modelling in Elman et al.'s approach.

Elman et al.'s ambitious goal is to propose theories of cognitive development that are grounded in our knowledge of biology. This is of course what the great Jean Piaget tried to achieve during his lifetime – unsuccessfully, as we know. Elman et al.'s advantage over Piaget is that they have a set of computational tools, connectionism, which allows them both to specify theories precisely and to study complex behaviours (such as epigenesis, where innate and environmental factors interact to create new levels of complexity) that are just beyond the (unaided) human mind. Even though I will highlight some of the weaknesses of their approach below, I should emphasize that reading their book was an exciting and enjoyable experience.

As noted by Rispoli, there are important problems with the simulations reported by Elman et al. Rispoli focuses on simulations of past tense acquisition and syntax acquisition, but the problems are by no means limited to these areas. I will briefly consider two recent developments in neural net research, one taken from the field of language acquisition and one from elsewhere, which underscore some of the difficulties of the simulations discussed in the book.


2020 ◽  
Vol 8 (1) ◽  
pp. 87-92
Author(s):  
Dries Daems

With an ever-growing range of computational tools and applications now available for archaeological practice, the potential of digital archaeology is greater than ever before. Yet, archaeological curricula have not always followed suit, and many archaeologists are not up-to-date with the necessary digital skills. To fill this gap, online tutorials and learning platforms are being developed to familiarize archaeologists and students with the potential of digital media for archaeological research practices. Given the essential pedagogical role of these platforms, their quality is deserving of deeper interrogation. Here, I review three major platforms offering tutorials on digital archaeology: the Programming Historian, Project MERCURY-SIMREC, and the Open Digital Archaeology Textbook. These are evaluated and compared based on their goals, design (intuitiveness, ease of use), accessibility (use of jargon, required prerequisite knowledge, software requirements), scope (target audience, range of skills addressed, targeted level of improvement), and efficiency (whether or not they achieve their intended goals). The review concludes with a road map contextualizing the current state of available resources in light of the wider state of digital archaeology, and it considers pathways toward future development.


2020 ◽  
Vol 13 (4) ◽  
Author(s):  
Saumya Das ◽  
Ravi Shah ◽  
Stefanie Dimmeler ◽  
Jane E. Freedman ◽  
Christopher Holley ◽  
...  

Background: The discovery that much of the non–protein-coding genome is transcribed and plays a diverse functional role in fundamental cellular processes has led to an explosion in the development of tools and technologies to investigate the role of these noncoding RNAs in cardiovascular health. Furthermore, identifying noncoding RNAs for targeted therapeutics to treat cardiovascular disease is an emerging area of research. The purpose of this statement is to review existing literature, offer guidance on tools and technologies currently available to study noncoding RNAs, and identify areas of unmet need.

Methods: The writing group used systematic literature reviews (including MEDLINE, Web of Science through 2018), expert opinion/statements, analyses of databases and computational tools/algorithms, and review of current clinical trials to provide a broad consensus on the current state of the art in noncoding RNA in cardiovascular disease.

Results: Significant progress has been made since the initial studies focusing on the role of miRNAs (microRNAs) in cardiovascular development and disease. Notably, recent progress on understanding the role of novel types of noncoding small RNAs such as snoRNAs (small nucleolar RNAs), tRNA (transfer RNA) fragments, and Y-RNAs in cellular processes has revealed a noncanonical function for many of these molecules. Similarly, the identification of long noncoding RNAs that appear to play an important role in cardiovascular disease processes, coupled with the development of tools to characterize their interacting partners, has led to significant mechanistic insight. Finally, recent work has characterized the unique role of extracellular RNAs in mediating intercellular communication and their potential role as biomarkers.

Conclusions: The rapid expansion of tools and pipelines for isolating, measuring, and annotating these entities suggests that caution in interpreting results is warranted until these methodologies are rigorously validated. Most investigators have focused on investigating the functional role of single RNA entities, but studies suggest complex interaction between different RNA molecules. The use of network approaches and advanced computational tools to understand the interaction of different noncoding RNA species to mediate a particular phenotype may be required to fully comprehend the function of noncoding RNAs in mediating disease phenotypes.
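As a hedged illustration of the network approaches mentioned in the conclusions, the sketch below builds a toy interaction graph between noncoding RNA species and a phenotype-linked mRNA and ranks nodes by degree centrality; every edge and weight is invented for the example, not a curated interaction.

```python
# Minimal sketch of a network approach to noncoding RNA interactions.
# All nodes, edges, and weights are invented for illustration only.
import networkx as nx

G = nx.Graph()
# Hypothetical interactions: (RNA species, partner, evidence weight)
edges = [
    ("miR-21", "lncRNA-X", 0.8),
    ("miR-21", "target_mRNA", 0.9),
    ("lncRNA-X", "target_mRNA", 0.5),
    ("tRF-5", "target_mRNA", 0.4),
    ("snoRNA-A", "lncRNA-X", 0.3),
]
G.add_weighted_edges_from(edges)

# Rank species by how connected they are in the toy network
centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: degree centrality = {score:.2f}")
```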


Mathematics ◽  
2020 ◽  
Vol 8 (8) ◽  
pp. 1201
Author(s):  
José A. Tenreiro Machado ◽  
António M. Lopes ◽  
Maria Eugénia Mata

War is a cause of gains and losses. Economic historians have long stressed the extreme importance of considering a society's economic potential for belligerency, the role of managing chaos to bear the costs of battle and casualties, and the ingenuity and improvisation required for emergency management. However, global and inter-temporal studies of warfare are missing. The adoption of computational tools for data processing is a key modeling option with present-day resources. In this paper, hierarchical clustering techniques and multidimensional scaling are used as efficient instruments for visualizing and describing military conflicts, selecting different metrics to assess their characterizing features: time, time span, number of belligerents, and number of casualties. Moreover, entropy is adopted for measuring war complexity over time. Although wars have been an important topic of analysis in all ages, they have been ignored as a subject of nonlinear dynamics and complex systems analysis. This paper seeks to fill these gaps in the literature by proposing a quantitative perspective based on algorithmic strategies. We verify a growing number of events and an explosion in their characteristics. The results have similarities to those exhibited by systems with increasing volatility, or evolving toward chaotic-like behavior. We can also question whether such dynamics follow the second law of thermodynamics, since the adopted techniques reflect a system with expanding entropy.
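The sketch below illustrates the kind of algorithmic strategy described: hierarchical clustering and multidimensional scaling over a small matrix of conflict features (start year, time span, belligerents, casualties), plus a Shannon-entropy summary. The numbers are made up for illustration and do not reproduce the paper's dataset or results.

```python
# Minimal sketch: hierarchical clustering, MDS, and entropy over a toy
# matrix of conflict features. Values are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

# rows: conflicts, columns: [start year, span (yrs), belligerents, casualties]
X = np.array([
    [1914, 4, 30, 1.7e7],
    [1939, 6, 60, 7.0e7],
    [1950, 3, 20, 3.0e6],
    [1861, 4, 2, 7.5e5],
], dtype=float)

# Standardize columns so no single feature dominates the metric
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Hierarchical clustering with Ward linkage
Z = linkage(Xs, method="ward")
print("cluster labels:", fcluster(Z, t=2, criterion="maxclust"))

# 2-D multidimensional scaling for visualization coordinates
coords = MDS(n_components=2, random_state=0).fit_transform(Xs)
print("MDS coordinates:\n", coords)

# Shannon entropy of the casualty distribution as a complexity proxy
p = X[:, 3] / X[:, 3].sum()
print("entropy:", -(p * np.log(p)).sum())
```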


Author(s):  
Briana Lucero ◽  
Peter Ngo ◽  
Julie Linsey ◽  
Cameron J. Turner

Computational tools for aiding design-by-analogy have so far focused on function- and keyword-based retrieval of analogues. Given the critical role of performance and benchmarking in design, there is a currently unmet need for performance-metrics-driven analogy retrieval. Towards meeting this need, a study was conducted to investigate and propose frameworks for organizing the myriad technical performance metrics in engineering design, such as measures of efficiency. Such organizational frameworks are needed for the implementation of a computational tool that can retrieve relevant analogies using performance metrics. The study, which takes a deductive approach, defines a hierarchical taxonomy of performance metrics akin to the functional basis vocabulary of function and flow terms. Its derivation follows from bond graphs, control theory, and Design for X guidelines.
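As a hedged sketch of how such a hierarchical taxonomy might be represented for metric-driven retrieval, the example below stores metrics in a simple tree and returns the path to a requested metric; the category names are invented and do not reproduce the taxonomy derived in the study.

```python
# Minimal sketch of a hierarchical performance-metric taxonomy a
# retrieval tool could traverse; names below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class MetricNode:
    name: str
    children: list["MetricNode"] = field(default_factory=list)

    def find(self, target: str, path=()):
        """Depth-first search returning the path from root to a metric."""
        path = path + (self.name,)
        if self.name == target:
            return path
        for child in self.children:
            hit = child.find(target, path)
            if hit:
                return hit
        return None

taxonomy = MetricNode("performance", [
    MetricNode("efficiency", [MetricNode("thermal efficiency"),
                              MetricNode("volumetric efficiency")]),
    MetricNode("capacity", [MetricNode("power density"),
                            MetricNode("load capacity")]),
])

print(taxonomy.find("volumetric efficiency"))
# ('performance', 'efficiency', 'volumetric efficiency')
```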


Author(s):  
Mara Capone ◽  
Emanuela Lanzara ◽  
Francesco Paolo Antonio Portioli ◽  
Francesco Flore

Starting from funicular models, chain models, and hanging membranes, the role of 3D physical models in the search for optimized shapes is at the basis of form-finding strategies. Advances in optimized structural shape design derive from the widespread availability of dedicated digital form-finding tools. The goal of this paper is to test and evaluate interdisciplinary approaches based on computational tools useful for the form finding of efficient structural systems. This work aims to design an inverse hanging shape subdivided into polygonal voussoirs (Voronoi patterns) by relaxing a planar, discrete, elastic system loaded at each point and anchored along its boundary. The workflow involves shaping, discretization (from pre-shaped paneling to digital stereotomy), and structural analysis carried out using two modeling approaches, finite element and rigid block modeling, with the in-house software tool LiABlock_3D (MATLAB®) used to check the stress state and to evaluate the equilibrium stability of the final shell.
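To illustrate the relaxation idea behind the inverse hanging shape, the sketch below relaxes a planar grid of elastic links, loaded at every node and pinned along its boundary, toward an equilibrium sag that can then be inverted into a compression form. Grid size, stiffness, load, and damping are assumptions for the example; this is not the authors' workflow or LiABlock_3D.

```python
# Minimal sketch of damped relaxation for a hanging-net form finding.
# All numerical parameters are illustrative assumptions.
import numpy as np

N = 15                                 # nodes per side (assumed)
K = 5.0                                # link stiffness (assumed)
LOAD = np.array([0.0, 0.0, -0.05])     # vertical load per node (assumed)
DAMPING = 0.9
DT = 0.01

# Planar grid of nodes, z = 0 initially; boundary nodes are anchored
xs, ys = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
pos = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)   # shape (N, N, 3)
vel = np.zeros_like(pos)
boundary = np.zeros((N, N), dtype=bool)
boundary[0, :] = boundary[-1, :] = boundary[:, 0] = boundary[:, -1] = True
rest = 1.0 / (N - 1)                   # rest length of each link

def link_force(p, q):
    """Elastic force on node p exerted by neighbouring node q."""
    d = q - p
    length = np.linalg.norm(d, axis=-1, keepdims=True)
    return K * (length - rest) * d / np.maximum(length, 1e-12)

for _ in range(2000):
    force = np.tile(LOAD, (N, N, 1))
    force[:, :-1] += link_force(pos[:, :-1], pos[:, 1:])   # right neighbours
    force[:, 1:]  += link_force(pos[:, 1:], pos[:, :-1])   # left neighbours
    force[:-1, :] += link_force(pos[:-1, :], pos[1:, :])   # lower neighbours
    force[1:, :]  += link_force(pos[1:, :], pos[:-1, :])   # upper neighbours
    vel = DAMPING * (vel + DT * force)
    vel[boundary] = 0.0                # anchored boundary nodes do not move
    pos += DT * vel

# Inverting the relaxed (sagging) shape yields a compression-dominated vault
print("max sag of the hanging net:", -pos[..., 2].min())
```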

