Handrails through the Swamp? A Pilot to Test the Integration and Implementation Science Framework in Complex Real-World Research

2021, Vol 13 (10), pp. 5491
Author(s): Melissa Robson-Williams, Bruce Small, Roger Robson-Williams, Nick Kirk

The socio-environmental challenges the world faces are ‘swamps’: situations that are messy, complex, and uncertain. The aim of this paper is to help disciplinary scientists navigate these swamps. To achieve this, the paper evaluates an integrative framework designed for researching complex real-world problems, the Integration and Implementation Science (i2S) framework. As a pilot study, we examine seven inter- and transdisciplinary agri-environmental case studies against the concepts presented in the i2S framework, and we hypothesise that considering concepts in the i2S framework during the planning and delivery of agri-environmental research will increase the usefulness of the research for next users. We found that for the types of complex, real-world research done in the case studies, increasing attention to the i2S dimensions correlated with increased usefulness for the end users. We conclude that using the i2S framework could provide handrails for researchers, to help them navigate the swamps when engaging with the complexity of socio-environmental problems.

1982, Vol 26 (2), p. 203
Author(s): James A. Wise

This is a panel session focused on the applications of Human Factors to real-world problems in architectural design. Five representatives from various design and research professions will present recent case studies of theirs and examine the contribution that Human Factors made to these projects. The diversity of their examples shows the usefulness and importance of integrating concerns for the human user into plans for the built environment.


2021
Author(s): Andreas Christ Sølvsten Jørgensen, Atiyo Ghosh, Marc Sturrock, Vahid Shahrezaei

Abstract
The modelling of many real-world problems relies on computationally heavy simulations. Since statistical inference rests on repeated simulations to sample the parameter space, the high computational expense of these simulations can become a stumbling block. In this paper, we compare two ways to mitigate this issue based on machine learning methods. One approach is to construct lightweight surrogate models to substitute for the simulations used in inference. Alternatively, one might circumvent the need for Bayesian sampling schemes altogether and directly estimate the posterior distribution. We focus on stochastic simulations that track autonomous agents and present two case studies of real-world applications: tumour growth and the spread of infectious diseases. We demonstrate that good accuracy in inference can be achieved with a relatively small number of simulations, making our machine learning approaches orders of magnitude faster than classical simulation-based methods that rely on sampling the parameter space. However, we find that while some methods generally produce more robust results than others, no algorithm offers a one-size-fits-all solution when attempting to infer model parameters from observations. Instead, one must choose the inference technique with the specific real-world application in mind. The stochastic nature of the considered real-world phenomena poses an additional challenge that can become insurmountable for some approaches. Overall, we find machine learning approaches that create direct inference machines to be promising for real-world applications. We present our findings as general guidelines for modelling practitioners.
Author summary
Computer simulations play a vital role in modern science, as they are commonly used to compare theory with observations. One can thus infer the properties of an observed system by comparing the data to the predicted behaviour in different scenarios. Each of these scenarios corresponds to a simulation with slightly different settings. However, since real-world problems are highly complex, the simulations often require extensive computational resources, making direct comparisons with data challenging, if not impossible. It is, therefore, necessary to resort to inference methods that mitigate this issue, but it is not clear-cut which path to choose for any specific research problem. In this paper, we provide general guidelines for how to make this choice. We do so by studying examples from oncology and epidemiology and by taking advantage of developments in machine learning. More specifically, we focus on simulations that track the behaviour of autonomous agents, such as single cells or individuals. We show that the best way forward is problem-dependent and highlight the methods that yield the most robust results across the different case studies. We demonstrate that these methods are highly promising and produce reliable results in a small fraction of the time required by classic approaches that rely on comparisons between data and individual simulations. Rather than relying on a single inference technique, we recommend employing several methods and selecting the most reliable based on predetermined criteria.
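The surrogate idea described in this abstract can be sketched in a few lines. The sketch below is a hypothetical toy, not the authors' actual pipeline: an "expensive" agent-based simulator is stood in for by a trivial stochastic function, a cheap least-squares line serves as the surrogate, and rejection sampling against the surrogate's predictions approximates the posterior. All function names, the uniform prior, and the tolerance are illustrative assumptions.

```python
import random

def simulate(theta, n=200, seed=None):
    # Toy stand-in for an expensive agent-based simulation: each of n
    # agents "divides" with probability theta; return the observed fraction.
    rng = random.Random(seed)
    return sum(rng.random() < theta for _ in range(n)) / n

def fit_surrogate(thetas, summaries):
    # Cheap surrogate: least-squares line summary ~ a * theta + b,
    # trained on a small budget of real simulations.
    n = len(thetas)
    mx, my = sum(thetas) / n, sum(summaries) / n
    a = sum((x - mx) * (y - my) for x, y in zip(thetas, summaries)) / \
        sum((x - mx) ** 2 for x in thetas)
    return a, my - a * mx

def rejection_posterior(observed, surrogate, n_draws=5000, tol=0.05, seed=1):
    # Rejection sampling: keep prior draws whose *surrogate* prediction
    # lies within tol of the observed summary, avoiding new simulations.
    a, b = surrogate
    rng = random.Random(seed)
    return [theta for theta in (rng.random() for _ in range(n_draws))
            if abs(a * theta + b - observed) < tol]

# Train the surrogate on a small number of "expensive" simulations.
train_thetas = [i / 20 for i in range(1, 20)]
train_summaries = [simulate(t, seed=i) for i, t in enumerate(train_thetas)]
surrogate = fit_surrogate(train_thetas, train_summaries)

# Inference against an observed summary statistic of 0.3.
posterior = rejection_posterior(observed=0.3, surrogate=surrogate)
```

Once trained, the surrogate answers each of the thousands of rejection-sampling queries at negligible cost, which is the source of the orders-of-magnitude speed-up the abstract reports.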


Author(s): Devin Pierce, Shulan Lu, Derek Harter

The past decade has witnessed incredible advances in building highly realistic and richly detailed simulated worlds. We readily endorse the common-sense assumption that people will be better equipped for solving real-world problems if they are trained in near-life, even if virtual, scenarios. The past decade has also witnessed a significant increase in our knowledge of how the human body, as both sensor and effector, relates to cognition. Evidence shows that our mental representations of the world are constrained by the bodily states present in our moment-to-moment interactions with the world. The current study investigated whether there are differences in how people enact actions in the simulated as opposed to the real world. We developed simple parallel task environments and asked participants to perform actions embedded in a stream of continuous events (e.g., cutting a cucumber). The results showed that participants performed actions at a faster speed and came closer to incurring injury to the fingers in the avatar-enacted action environment than in the human-enacted action environment.


2011, Vol 1 (1), pp. 75-84
Author(s): Wanty Widjaja

The notion of mathematical literacy advocated by PISA (OECD, 2006) offers a broader conception for assessing mathematical competences and processes, with the main focus on the relevant use of mathematics in life. This notion of mathematical literacy is closely connected to the notion of mathematical modelling, whereby mathematics is put to use in solving real-world problems. Indonesia has participated as a partner country in PISA since 2000. The PISA trends in mathematics from 2003 to 2009 revealed unsatisfactory mathematical literacy among 15-year-old students from Indonesia, who lagged behind the average of OECD countries. In this paper, exemplary cases are discussed to examine and promote mathematical literacy at the teacher education level. Lesson ideas and instruments were adapted from the PISA 2006 released items. The potential of such tasks is discussed based on case studies of implementing these instruments with samples of pre-service teachers in Yogyakarta.


Author(s): G.R. Gangadharan, Lorna Uden, Paul Oude Luttighuis

Software as a Service (SaaS) has become an important paradigm in the world of enterprise software and business services markets. SaaS supports the concept of outsourcing, where business processes are offered under a service level agreement for a given price. However, sourcing SaaS may not always involve outsourcing with respect to the transfer of internal activities and resources to external service providers. Users of SaaS need to know what strategies to use when determining sourcing requirements. In this paper, the authors develop a classification for sourcing SaaS based on Kraljic's matrix and a mapping of SaaS services to the sourcing structures. They then evaluate the proposed sourcing models against two real-world case studies.
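The core of a Kraljic-style classification can be captured in a small function. The sketch below is a hypothetical illustration of the general matrix, not the authors' actual model: the two scores, the threshold, and the example services in the comments are all assumptions made for the sake of the example.

```python
def kraljic_quadrant(profit_impact, supply_risk, threshold=0.5):
    """Map a SaaS service to a Kraljic quadrant from two scores in [0, 1].

    Classic Kraljic axes: profit impact vs. supply risk.
    Thresholds and labels here are illustrative, not prescriptive.
    """
    if profit_impact >= threshold and supply_risk >= threshold:
        return "strategic"     # e.g. a core ERP consumed as SaaS
    if profit_impact >= threshold:
        return "leverage"      # high impact, many interchangeable vendors
    if supply_risk >= threshold:
        return "bottleneck"    # low impact but few alternative providers
    return "non-critical"      # commodity services, e.g. hosted e-mail
```

Each quadrant then suggests a different sourcing posture: strategic services warrant close partnerships and strong SLAs, leverage services invite competitive tendering, bottleneck services call for risk mitigation, and non-critical services can be procured with minimal effort.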


2020, Vol 6 (3), pp. 136-155
Author(s): Nikita Gupta, Nishant Bhardwaj, Gulam Muhammad Khan, Vivek Dave

Background: Computational fluid dynamics (CFD) came into existence with great success, replacing the traditional methods used to simulate problems related to fluid flow. The first CFD tool was introduced to the world in 1957, developed by a team at Los Alamos National Lab. To achieve high performance and meet expected results with ease under modern process conditions, engineers are now more inclined towards simulation software than traditional methods. Hence, in the current scenario, with the advancement of computer technologies, CFD is recognized as an excellent tool for engineers to resolve real-world problems.
Introduction: CFD is a branch of fluid dynamics that uses numerical analysis and data structures to solve problems related to the flow of fluids (gases or liquids). CFD is based on three major principles: mass conservation, Newton's second law, and energy conservation. CFD has extended to a remarkable number of applications in fields such as aerospace, sports, the food industry, engineering, hydraulics, HVAC (heating, ventilating, and air conditioning), automotive, environmental, power generation, biomedical, pharmaceutical, and many more. Hence, software packages such as ANSYS, OpenFOAM, SimScale, Gerris, Autodesk Simulation, and Code_Saturne are useful for executing these operations and finding solutions to real-world problems within a fraction of a second.
Methods: CFD analysis involves three major steps: pre-processing, solution, and post-processing. Pre-processing deals with defining model goals, identifying the domain, designing, and creating the grid. Solution involves setting up the numerical model and computing and monitoring the solution, whereas post-processing includes examining the results and revising the model.
Results: The review covers current challenges in computational fluid dynamics. CFD is relevant in different areas of engineering for solving problems occurring globally with the aid of simulation-based software, thereby simplifying otherwise complex problems.
Conclusion: Computational fluid dynamics is relevant to every kind of problem related to fluid flow, whether in the human body or elsewhere. In the contemporary era, there are numerous simulation-based software packages that provide excellent results with a single click, resolving problems within microseconds. Hence, we cannot imagine our present and future without CFD, which has ultimately made the execution of work easier. Lastly, we can conclude that CFD is a faster, smarter, and lighter way of designing processes.
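The pre-processing / solution / post-processing pipeline that the abstract describes can be illustrated on the smallest possible example. The sketch below is not the API of any package named above; it is a minimal explicit finite-difference solve of the 1D diffusion equation du/dt = nu * d2u/dx2, with the grid sizes, time step, and initial condition chosen purely for illustration.

```python
# 1. Pre-processing: define model goals and the domain, create the grid.
nx, nu, dx, dt, steps = 21, 0.1, 0.05, 0.005, 100
u = [0.0] * nx
u[nx // 2] = 1.0          # initial "hot spot" in the middle of the domain

# Stability condition for the explicit scheme: nu * dt / dx**2 <= 0.5
assert nu * dt / dx**2 <= 0.5

# 2. Solution: set up the numerical model and march it forward in time.
for _ in range(steps):
    prev = u[:]
    for i in range(1, nx - 1):   # boundaries stay fixed at 0
        u[i] = prev[i] + nu * dt / dx**2 * (prev[i-1] - 2*prev[i] + prev[i+1])

# 3. Post-processing: examine the results (here, simple summary numbers;
# a real CFD tool would render contour plots, streamlines, etc.).
peak = max(u)
total = sum(u)
```

Real CFD packages automate exactly these three stages in two or three dimensions with far more sophisticated meshes, turbulence models, and solvers, but the structure of the workflow is the same.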


2020, Vol 9 (1), pp. 121-139
Author(s): Dita Dzata Mirrota, Desy Nailasari

Authentic assessment is carried out comprehensively to assess learning inputs, processes, and outputs. Authentic assessment must reflect real-world problems, not the world of schools. This study aims to describe the problematics of implementing authentic assessment in the subject of the Qur'anic Hadith. This research is field research. The results of this study concern the implementation of authentic assessment in the Qur'anic Hadith subject at MTsN Gandusari Blitar, where the implementation requires improvement. Problems with implementing authentic assessment include: numerous instruments and formats, the long time required, the assessment process itself, attitude assessment that demands accuracy, limited educators, inputs, and considerable costs. The solutions offered are: conducting MGMP meetings, workshops, or guidance on authentic assessment; increasing the number of educators; assessing according to the provisions; engaging the parties who have a role to play; and becoming accustomed to assessing authentically, properly, and correctly.


2018, Vol 6 (3), pp. 44
Author(s): Robert Sternberg

In this article I suggest why a symposium is desirable on the topic of why, despite worldwide increases in IQ since the beginning of the 20th century, there are so many unresolved and dramatic problems in the world. I briefly discuss what some of these problems are, and the paradox of people with higher IQs not only being unable to solve them but, in some cases, being unwilling to address them. I suggest that higher IQ is not always highly relevant to the problems and, in some cases, may displace other skills that would better apply to the solution of the problems. I present a limited-resource model as an adjunct to the augmented theory of successful intelligence. The model suggests that increasing societal emphases on analytical abilities have displaced the development and utilization of other skills, especially creative, practical, and wisdom-based ones, that could better be applied to serious world problems. I also discuss the importance of cognitive inoculation against unscrupulous and sometimes malevolent attempts to change belief systems.

