Biomedical Microtechnologies beyond Scholarly Impact

Micromachines ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1471
Author(s):  
Maria Vomero ◽  
Giuseppe Schiavone

The recent tremendous advances in medical technology at the level of academic research have set high expectations for the clinical outcomes they promise to deliver. To the detriment of patients' hopes, however, the more disruptive and invasive a new technology is, the wider the gap separating the conceptualization of a medical device from its adoption into healthcare systems. When technology breakthroughs are reported in the biomedical scientific literature, news coverage typically focuses on the medical implications rather than the engineering progress, as the former hold greater appeal for a general readership. While successful therapy and diagnostics are indeed the ultimate goals, it is equally important to expose the engineering thinking needed to achieve such results and, critically, to identify the challenges that still lie ahead. Here, we would like to provoke thought on the following questions, with particular focus on microfabricated medical devices: should research advancing the maturity and reliability of medical technology benefit from higher accessibility and visibility? How can the scientific community encourage and reward academic work on the overshadowed engineering aspects that will facilitate the evolution of laboratory samples into clinical devices?

Author(s):  
Stefanie Rothkötter ◽  
Craig C. Garner ◽  
Sándor Vajna

In light of a growing research interest in the innovation potential that lies at the intersection of design, technology, and science, this paper offers a literature review of design initiatives centered on scientific discovery and invention. The focus of this paper is on evidence of design capabilities in the academic research environment. The results are structured along the Four Orders of Design, with examples of design-in-science initiatives ranging from (1) the design of scientific figures and (2) laboratory devices using new technology to (3) interactions in design workshops for scientists and (4) interdisciplinary design labs. While design capabilities have appeared in all four orders of design, there are barriers and cultural constraints that must be taken into account when working at or researching these creative intersections. Modes of design integration and potentially necessary adaptations of design practice are therefore also highlighted.


2012 ◽  
Vol 7 (3) ◽  
pp. 4 ◽  
Author(s):  
Meg Raven

Objective: This study sought to better understand the research expectations of first-year students upon beginning university study, and how these expectations differed from those of their professors. Most academic librarians observe that the research expectations of these two groups differ considerably; being able to articulate where these differences are greatest may help us provide more focused instruction and work more effectively with professors and student support services. Methods: 317 first-year undergraduate students and 75 professors at Mount Saint Vincent University in Halifax, NS were surveyed to determine what each group expected of first-year student research. Students were surveyed on the first day of term so as to best capture their research expectations as they transitioned from high school to university. Results: The gulf between student and professor research expectations was found to be considerable, especially in areas such as the time required for reading and research, and the resources necessary to do research. While students rated their preparedness for university as high, they also had high expectations related to their ability to use non-academic sources. Not unexpectedly, the majority of professors believed that students are not prepared to do university-level research, do not take enough responsibility for their own learning, should use more academic research sources, and should read twice as much as students believe they should. Conclusions: By better understanding these differing research expectations, librarians and professors can guide students very early in their studies toward appropriate academic research practices and provide improved research instruction. Strategies for working with students, professors, and the university community are discussed.
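
To make the kind of gap the study measures concrete, here is a minimal sketch, with purely hypothetical numbers, of how mean expectations from two survey groups might be compared. The reading-hours variable and all values are illustrative assumptions, not the study's data.

```python
# A minimal sketch (not from the study) of quantifying the expectation gap
# between two survey groups. All values below are hypothetical illustrations.
from statistics import mean

# Hypothetical responses: expected weekly reading hours for a first-year course.
student_hours = [3, 4, 5, 2, 6, 4]    # illustrative sample of student answers
professor_hours = [8, 10, 6, 9]       # illustrative sample of professor answers

gap = mean(professor_hours) - mean(student_hours)
print(f"Students expect {mean(student_hours):.1f} h/week of reading, "
      f"professors expect {mean(professor_hours):.1f} h/week (gap: {gap:.1f} h).")
```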


2014 ◽  
Vol 2 (1) ◽  
pp. 46-56 ◽  
Author(s):  
Alexandr Sergeevich Iova ◽  
Irina Alexandrovna Krukova ◽  
Dmitriy Alexandrovich Iova

This article addresses a pressing problem in present-day traumatology: improving the delivery of medical care to patients with polytrauma. It presents the new technology "Pansonoscopy", a minimally invasive and widely available method for fast imaging of the "whole body" of the patient in any medical situation. It allows the most frequent and dangerous traumatic injuries (cranial, thoracic, abdominal, skeletal, etc.) to be detected with portable ultrasound scanners in real-time mode. Fundamentally new features are the guaranteed imaging of intracranial injuries, the possibility for clinicians to perform the ultrasound examination themselves, and the option of online consultations with experts (sonologists). The technology is intended for the broad range of practitioners who render medical care to patients with polytrauma.


2016 ◽  
Author(s):  
Lynn Zentner ◽  
Gerhard Klimeck

Established in 2002, nanoHUB.org continues to attract a large community of users for computational tools and learning materials related to nanotechnology [1, 2]. Over the last 12 months, nanoHUB has engaged over 1.4 million visitors and 13,000 simulation users with over 5,000 items of content, making it a premier example of an established science gateway. The nanoHUB team tracks references to nanoHUB in the scientific literature and has found nearly 1,600 vetted citations to nanoHUB, with over 19,000 secondary citations to the primary papers, supporting the concept that nanoHUB enables quality research. nanoHUB is also used extensively for both informal and formal education [3, 4], with automatic algorithms detecting use in 1,501 classrooms reaching nearly 30,000 students. During 14 years of operation, the nanoHUB team has had an opportunity to study the behaviors of its user base, evaluate mechanisms for success, and learn when and how to make adjustments to better serve the community and stakeholders. We have developed a set of success criteria for a science gateway such as nanoHUB for attracting and growing an active community of users. Outstanding science content is necessary, and that content must continue to expand or the gateway and its community will grow stagnant. A large challenge is to incentivize a community not only to use the site but, more importantly, to contribute [5, 6]. There is often a recruitment and conversion process that involves first attracting users, then giving them reasons to stay, use, and share increasingly complex content, and finally encouraging them to become content authors themselves. This process requires a good understanding of the user community and its needs, as well as an active outreach program led by a user-oriented content steward with a technical background sufficient to understand the work and needs of the community. A reliable infrastructure is critical to maintaining an active, participatory community. Built on the underlying HUBzero® technology, nanoHUB is able to leverage infrastructure developments from across a wide variety of hubs and, by utilizing platform support from the HUBzero team, to access development and operational expertise from a team of 25 professionals that one scientific project would be hard-pressed to support on its own. nanoHUB has found that open assessment and presentation of usage statistics and impact metrics not only inform development and outreach activities but also incentivize users and provide transparency to the scientific community at large.
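
As an illustration of the classroom-detection idea mentioned above, the following is a speculative sketch of one heuristic a gateway could apply to its usage logs. It is not nanoHUB's actual algorithm; the log format, grouping key, and threshold are assumptions made for the example.

```python
# A speculative sketch of detecting classroom cohorts from gateway usage logs.
# NOT nanoHUB's actual algorithm; log schema and threshold are illustrative.
from collections import defaultdict
from datetime import date

# Hypothetical log entries: (user_id, institution, tool, activity_date)
logs = [
    ("u1", "uniA", "tool-x", date(2016, 3, 1)),
    ("u2", "uniA", "tool-x", date(2016, 3, 1)),
    ("u3", "uniA", "tool-x", date(2016, 3, 2)),
    ("u4", "uniB", "tool-y", date(2016, 3, 5)),
]

# Group distinct users by (institution, tool, ISO week); a burst of many users
# running the same tool in the same week suggests a classroom assignment.
cohorts = defaultdict(set)
for user, inst, tool, day in logs:
    cohorts[(inst, tool, day.isocalendar()[1])].add(user)

MIN_COHORT = 3  # illustrative threshold for calling a burst a "classroom"
classrooms = {key: users for key, users in cohorts.items() if len(users) >= MIN_COHORT}
print(classrooms)
```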


2021 ◽  
Author(s):  
Michael Christian Leitner ◽  
Frank Daumann ◽  
Florian Follert ◽  
Fabio Richlan

The phenomenon of home advantage (or home bias) is well analyzed in the scientific literature and is traditionally an interdisciplinary topic. Current theorizing views the fans as a crucial factor influencing the outcome of a football (a.k.a. soccer) game, as the crowd influences the behavior of the players and officials through social pressure. So far, the phenomenon has been difficult to study because, although there have always been single matches from which spectators were excluded, this never happened globally to all teams within a league or even across leagues. From an empirical perspective, the governmental COVID-19 measures, especially the worldwide ban of fans from stadiums, can be interpreted as a "natural experiment" and analyzed accordingly. Thus, several studies examined the influence of supporters by comparing matches before the COVID-19 restrictions with so-called ghost games during the pandemic. To synthesize the existing knowledge after over a year of ghost games and to offer the scientific community and other stakeholders an overview of the numerous studies, we provide a systematic literature review that summarizes the main findings of empirical studies and discusses the results accordingly. Our findings, based on 16 studies, indicate that ghost games have a considerable impact on the phenomenon of home advantage. No study found an increased home advantage in ghost games. Rather, our results show that 13 of the 16 included studies conclude, based on their individually analyzed data, that home advantage decreased more or less significantly in ghost games. We conclude that our findings are highly relevant from both a socio-economic and a behavioral perspective and highlight the direct and indirect influence of spectators and fans on football. Beyond the scientific community, our results are of high importance for sports and team managers, media executives, fan representatives, and other responsible parties.
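
To illustrate the kind of before/after comparison these studies perform, here is a minimal sketch with invented match records. The home-advantage measure used (share of available points won by home teams) is one common choice; none of the data below comes from the reviewed studies.

```python
# A minimal sketch (illustrative, not from any of the 16 reviewed studies) of
# comparing home advantage between regular matches and ghost games.

# Hypothetical match records: (home_goals, away_goals, is_ghost_game)
matches = [
    (2, 1, False), (1, 1, False), (3, 0, False), (0, 2, False),
    (1, 2, True), (0, 0, True), (1, 3, True), (2, 2, True),
]

def home_points_share(games):
    """Share of available points won by home teams (3 win / 1 draw / 0 loss)."""
    home_points = sum(3 if h > a else 1 if h == a else 0 for h, a, _ in games)
    return home_points / (3 * len(games))

regular = [m for m in matches if not m[2]]
ghost = [m for m in matches if m[2]]
print(f"home points share: regular={home_points_share(regular):.2f}, "
      f"ghost={home_points_share(ghost):.2f}")
```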


Author(s):  
Luc Schneider

This contribution tries to assess how the Web is changing the ways in which scientific knowledge is produced, distributed, and evaluated, in particular how it is transforming the conventional conception of scientific authorship. After introducing the notions of copyright, public domain, and (e-)commons, I will critically assess James Boyle's (2003, 2008) thesis that copyright and scientific (e-)commons are antagonistic, but I will mostly agree with the related claim by Stevan Harnad (2001a, b, 2008) that copyright has become an obstacle to the accessibility of scientific works. I will even go further and argue that Open Access schemes not only solve the problem of the availability of scientific literature but may also help to tackle the uncontrolled multiplication of scientific publications, since these publishing schemes are based on free public licenses allowing for (acknowledged) re-use of texts. However, the scientific community does not yet seem prepared to move towards an Open Source model of authorship, probably due to concerns about attributing credit and responsibility for the expressed hypotheses and results. Some strategies and tools that may encourage a change of academic mentality in favour of a conception of scientific authorship modelled on the Open Source paradigm are discussed.


2011 ◽  
pp. 1323-1331
Author(s):  
Jeffrey W. Seifert

A significant amount of attention appears to be focused on how to better collect, analyze, and disseminate information. In doing so, technology is commonly and increasingly looked upon as both a tool and, in some cases, a substitute for human resources. One such technology playing a prominent role in homeland security initiatives is data mining. Similar to the concept of homeland security, while data mining is widely mentioned in a growing number of bills, laws, reports, and other policy documents, an agreed-upon definition or conceptualization of data mining appears to be generally lacking within the policy community (Relyea, 2002). While data mining initiatives are usually purported to provide insightful, carefully constructed analysis, at various times data mining itself is alternatively described as a technology, a process, and/or a productivity tool. In other words, data mining, or factual data analysis, or predictive analytics, as it is also sometimes called, means different things to different people. Regardless of which definition one prefers, a common theme is the ability to collect and combine, virtually if not physically, multiple data sources for the purpose of analyzing the actions of individuals. In other words, there is an implicit belief in the power of information, suggesting a continuing trend in the growth of “dataveillance,” or the monitoring and collection of the data trails left by a person’s activities (Clarke, 1988). More importantly, it is clear that there are high expectations for data mining, or factual data analysis, to be an effective tool. Data mining is not a new technology, but its use is growing significantly in both the private and public sectors. Industries such as banking, insurance, medicine, and retailing commonly use data mining to reduce costs, enhance research, and increase sales. In the public sector, data mining applications were initially used as a means to detect fraud and waste but have grown to be used for purposes such as measuring and improving program performance. While not completely without controversy, these types of data mining applications have gained greater acceptance. However, some national defense/homeland security data mining applications represent a significant expansion in the quantity and scope of data to be analyzed. Moreover, due to their security-related nature, the details of these initiatives (e.g., data sources, analytical techniques, access and retention practices, etc.) are usually less transparent.
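
As a schematic illustration of the pattern described above, combining multiple data sources, virtually if not physically, to analyze the actions of individuals, the following sketch joins two hypothetical record sets and applies an arbitrary predictive rule. All sources, fields, and thresholds are invented for illustration.

```python
# A schematic sketch of the core data-mining pattern the entry describes:
# a "virtual" join across data sources plus a simple analytic rule.
# Sources, fields, and the rule are hypothetical illustrations only.

travel = {"p1": {"trips": 12}, "p2": {"trips": 2}}
finance = {"p1": {"cash_deposits": 9}, "p2": {"cash_deposits": 1}}

def combined_view(person_id):
    """Combine records virtually, without physically merging the databases."""
    record = {}
    for source in (travel, finance):
        record.update(source.get(person_id, {}))
    return record

def flag(person_id):
    """Illustrative predictive-analytics rule; thresholds are arbitrary."""
    r = combined_view(person_id)
    return r.get("trips", 0) > 10 and r.get("cash_deposits", 0) > 5

print([p for p in travel if flag(p)])  # -> ['p1']
```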


Resources ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 6 ◽  
Author(s):  
Rúben Mendes ◽  
Teresa Fidélis ◽  
Peter Roebeling ◽  
Filipe Teles

The European Union quickly incorporated the concept of nature-based solutions (NBS), becoming a key promoter of it through financial support for both academic research and city implementations. Still, the processes of institutionalization are yet to be fully explored. This study aims at assessing how the scientific literature on NBS addresses institutional aspects and how it is constructing the NBS narrative. The research is divided into two stages. First, it undertakes a quantitative analysis of the discourse, considering a set of preselected search terms organized into five categories: actor, institutional, planning, policy, and regulation. Second, it adopts a qualitative analysis of both a group of the most-cited articles and a group of articles highlighted in the previous stage. The results indicate that the NBS concept is still shadowed by other environmental concepts such as ecosystem services. Despite being an issue promoted at the European level, the results of this exercise reveal a lack of concrete planning and policy recommendations, reflected in the absence of terms such as “planning objectives”. This pattern occurs across all major categories, with the institutional category the least mentioned of the five. The results highlight the need to address both policy and planning recommendations more concretely, studying the institutional arrangements able to promote NBS.
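
The first, quantitative stage described above lends itself to a simple illustration: counting preselected search terms by category across a corpus of texts. The sketch below assumes illustrative term lists for the five named categories; they are not the authors' actual search terms.

```python
# A minimal sketch of category-based term counting over a text corpus, in the
# spirit of the paper's quantitative stage. Term lists are illustrative
# assumptions, not the authors' actual preselected search terms.
import re
from collections import Counter

CATEGORIES = {
    "actor": ["stakeholder", "citizen"],
    "institutional": ["institution", "governance"],
    "planning": ["planning objectives", "urban plan"],
    "policy": ["policy", "policy recommendation"],
    "regulation": ["regulation", "legal"],
}

def count_terms(texts):
    """Count occurrences of each category's terms across all texts."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for category, terms in CATEGORIES.items():
            counts[category] += sum(len(re.findall(re.escape(term), lowered))
                                    for term in terms)
    return counts

corpus = ["Urban planning and governance of nature-based solutions ..."]
print(count_terms(corpus))
```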


Author(s):  
Anderson Rossanez ◽  
Julio Cesar dos Reis ◽  
Ricardo da Silva Torres ◽  
Hélène de Ribaupierre

Background: Knowledge is often produced from data generated in scientific investigations. An ever-growing number of scientific studies in several domains results in a massive amount of data, and obtaining new knowledge from it requires computational help. Consider, for example, Alzheimer’s Disease, a life-threatening degenerative disease that is not yet curable. As the scientific community strives to better understand it and find a cure, great amounts of data have been generated from which new knowledge can be produced. A proper representation of such knowledge brings great benefits to researchers, to the scientific community, and, consequently, to society. Methods: In this article, we study and evaluate a semi-automatic method that generates knowledge graphs (KGs) from biomedical texts in the scientific literature. Our solution explores natural language processing techniques with the aim of extracting and representing scientific literature knowledge encoded in KGs. Our method links the entities and relations represented in KGs to concepts from existing biomedical ontologies available on the Web. We demonstrate the effectiveness of our method by generating KGs from unstructured texts obtained from a set of abstracts of scientific papers on Alzheimer’s Disease. We involved physicians, who compared the triples extracted by our method against those they extracted manually from the abstracts. The evaluation also included the physicians’ qualitative analysis of the generated KGs using our software tool. Results: The experimental results indicate the quality of the generated KGs. The proposed method extracts a large number of triples, showing the effectiveness of the rule-based method employed to identify relations in texts. In addition, ontology links are successfully obtained, which demonstrates the effectiveness of the ontology-linking method proposed in this investigation. Conclusions: We demonstrate that our proposal is effective at building ontology-linked KGs representing the knowledge obtained from biomedical scientific texts. Such a representation can add value to research in various domains, enabling researchers to compare the occurrence of concepts across studies. The generated KGs may pave the way for new theories, based on data analysis, that advance the state of the art in their research domains.
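
To make the pipeline concrete, here is a toy sketch of rule-based triple extraction followed by ontology linking. The sentence pattern, relation vocabulary, and ontology lookup table are illustrative stand-ins for the authors' NLP techniques and the Web ontologies they link to.

```python
# A toy sketch of the described pipeline: rule-based extraction of
# (subject, relation, object) triples, then linking entities to ontology
# concepts. Pattern, relations, and the ontology index are illustrative
# assumptions, not the authors' actual method or ontologies.
import re

# Naive rule: "<entity> <relation phrase> <entity>" within one sentence.
TRIPLE_RULE = re.compile(r"^(.+?)\s+(inhibits|causes|is associated with)\s+(.+?)\.?$")

# Hypothetical ontology index mapping surface forms to concept IRIs.
ONTOLOGY = {
    "amyloid beta": "http://example.org/onto/AmyloidBeta",
    "neurodegeneration": "http://example.org/onto/Neurodegeneration",
}

def extract_triples(sentences):
    """Extract triples and replace known entities with ontology concept IRIs."""
    triples = []
    for sentence in sentences:
        match = TRIPLE_RULE.match(sentence.strip())
        if match:
            subj, rel, obj = (g.lower() for g in match.groups())
            triples.append((ONTOLOGY.get(subj, subj), rel, ONTOLOGY.get(obj, obj)))
    return triples

print(extract_triples(["Amyloid beta causes neurodegeneration."]))
```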

