Challenges and Opportunities in Collaborative Vulnerability Research Workflows

Author(s):  
Ryan Mullins ◽  
Deirdre Kelliher ◽  
Ben Nargi ◽  
Mike Keeney ◽  
Nathan Schurr

Recently, cyber reasoning systems demonstrated near-human performance characteristics when they autonomously identified, proved, and mitigated vulnerabilities in software during a competitive event. New research seeks to augment human vulnerability research teams with cyber reasoning system teammates in collaborative work environments. However, the literature lacks a concrete understanding of vulnerability research workflows and practices, limiting designers’, engineers’, and researchers’ ability to successfully integrate these artificially intelligent entities into teams. This paper contributes a general workflow model of the vulnerability research process and identifies specific collaboration challenges and opportunities anchored in this model. Contributions were derived from a qualitative field study of the work habits, behaviors, and practices of human vulnerability research teams. These contributions will inform future work in the vulnerability research domain by establishing an empirically driven workflow model that can be adapted to the specific organizational and functional constraints placed on individuals and teams.

2020 ◽  
Author(s):  
Miles D Witham ◽  
Eleanor Anderson ◽  
Camille Carroll ◽  
Paul M Dark ◽  
Kim Down ◽  
...  

Abstract

Background: Participants in clinical research studies often do not reflect the populations for which healthcare interventions are needed or will be used. Enhancing representation of underserved groups in clinical research is important to ensure that research findings are widely applicable. We describe a multicomponent workstream project to improve representation of underserved groups in clinical trials.

Methods: The project comprised three main strands: 1) a targeted scoping review of the literature to identify previous work characterising underserved groups and barriers to inclusion; 2) surveys of professional stakeholders and participant representative groups involved in research delivery to refine these initial findings and identify examples of innovation and good practice; and 3) a series of workshops bringing together key stakeholders from funding, design, delivery, and participant groups to reach consensus on definitions, barriers, and a strategic roadmap for future work. The work was commissioned by the UK National Institute for Health Research Clinical Research Network. Output from these strands was integrated by a steering committee to generate a series of goals, workstream plans, and a strategic roadmap for future development work in this area.

Results: ‘Underserved groups’ was identified and agreed by the stakeholder group as the preferred term. Three-quarters of stakeholders felt that a clear definition of underserved groups did not currently exist; definition was challenging and context-specific, but exemplar groups (e.g. those with language barriers or mental illness) were identified as underserved. Barriers to successful inclusion of underserved groups clustered into: communication between research teams and participant groups; how trials are designed and delivered; differing agendas of research teams and participant groups; and lack of trust in the research process. Four key goals for future work were identified: building long-term relationships with underserved groups; developing training resources to improve the design and delivery of trials for underserved groups; developing infrastructure and systems to support this work; and working with funders, regulators, and other stakeholders to remove barriers to inclusion.

Conclusions: The work of the INCLUDE group over the next 12 months will build on these findings by generating resources customised for different underserved groups to improve the representativeness of trial populations.


2020 ◽  
Vol 17 (2-3) ◽  
Author(s):  
Dagmar Waltemath ◽  
Martin Golebiewski ◽  
Michael L Blinov ◽  
Padraig Gleeson ◽  
Henning Hermjakob ◽  
...  

Abstract This paper presents a report on outcomes of the 10th Computational Modeling in Biology Network (COMBINE) meeting that was held in Heidelberg, Germany, in July of 2019. The annual event brings together researchers, biocurators and software engineers to present recent results and discuss future work in the area of standards for systems and synthetic biology. The COMBINE initiative coordinates the development of various community standards and formats for computational models in the life sciences. Over the past 10 years, COMBINE has brought together standard communities that have further developed and harmonized their standards for better interoperability of models and data. COMBINE 2019 was co-located with a stakeholder workshop of the European EU-STANDS4PM initiative that aims at harmonized data and model standardization for in silico models in the field of personalized medicine, as well as with the FAIRDOM PALs meeting to discuss findable, accessible, interoperable and reusable (FAIR) data sharing. This report briefly describes the work discussed in invited and contributed talks as well as during breakout sessions. It also highlights recent advancements in data, model, and annotation standardization efforts. Finally, this report concludes with some challenges and opportunities that this community will face during the next 10 years.


2021 ◽  
Vol 11 (2) ◽  
pp. 740
Author(s):  
Krzysztof Zatwarnicki ◽  
Waldemar Pokuta ◽  
Anna Bryniarska ◽  
Anna Zatwarnicka ◽  
Andrzej Metelski ◽  
...  

Artificial intelligence has been under development since the beginning of IT systems, and today many AI techniques are applied successfully. Most of the field, however, is concerned with so-called “narrow AI”, which demonstrates intelligence only in specialized areas. There is a need to work on general AI solutions that would constitute a framework enabling the integration of already developed narrow solutions and contribute to solving general problems. In this work, we present a new language that could become a basis for building general-purpose intelligent systems in the future. This language is called the General Environment Description Language (GEDL). We present the motivation for our research, grounded in other works in the field. We then give an overall description of the idea and basic definitions of the elements of the language. We also present an example of GEDL usage in JSON notation; the example shows how to store knowledge, define the problem to be solved, and express the solution itself. In the end, we present potential fields of application and future work. This article is an introduction to new research in the field of Artificial General Intelligence.
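The abstract does not reproduce the JSON example itself, so the snippet below is a purely hypothetical sketch of how a GEDL-style document might separate stored knowledge, a problem statement, and a solution. Every field name here is invented for illustration and should not be read as the actual GEDL schema defined in the paper.

```python
import json

# Hypothetical sketch of a GEDL-style environment description.
# All field names ("knowledge", "problem", "solution", ...) are invented
# for illustration; the real GEDL schema is defined in the paper.
gedl_example = {
    "knowledge": {
        "objects": [
            {"id": "box1", "type": "container", "state": "closed"},
            {"id": "key1", "type": "key", "location": "table"},
        ],
        "relations": [
            {"subject": "key1", "predicate": "opens", "object": "box1"},
        ],
    },
    "problem": {
        # Goal state the reasoner should achieve.
        "goal": {"object": "box1", "state": "open"},
    },
    "solution": {
        # An action sequence that satisfies the goal.
        "steps": [
            {"action": "take", "object": "key1"},
            {"action": "use", "object": "key1", "target": "box1"},
        ],
    },
}

print(json.dumps(gedl_example, indent=2))
```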


2003 ◽  
Vol 12 (3) ◽  
pp. 311-325 ◽  
Author(s):  
Martin R. Stytz ◽  
Sheila B. Banks

The development of computer-generated synthetic environments, also called distributed virtual environments, for military simulation relies heavily upon computer-generated actors (CGAs) to provide accurate behaviors at reasonable cost so that the synthetic environments are useful, affordable, complex, and realistic. Unfortunately, the pace of synthetic environment development and the level of desired CGA performance continue to rise at a much faster rate than CGA capability improvements. This insatiable demand for realism in CGAs for synthetic environments arises from the growing understanding of the significant role that modeling and simulation can play in a variety of venues. These uses include training, analysis, procurement decisions, mission rehearsal, doctrine development, force-level and task-level training, information assurance, cyberwarfare, force structure analysis, sustainability analysis, life cycle costs analysis, material management, infrastructure analysis, and many others. In these and other uses of military synthetic environments, computer-generated actors play a central role because they have the potential to increase the realism of the environment while also reducing the cost of operating the environment. The progress made in addressing the technical challenges that must be overcome to realize effective and realistic CGAs for military simulation environments and the technical areas that should be the focus of future work are the subject of this series of papers, which survey the technologies and progress made in the construction and use of CGAs. In this, the first installment in the series of three papers, we introduce the topic of computer-generated actors and issues related to their performance and fidelity and other background information for this research area as related to military simulation. We also discuss CGA reasoning system techniques and architectures.


2009 ◽  
Vol 18 (6) ◽  
pp. 449-467 ◽  
Author(s):  
Joel C Huegel ◽  
Ozkan Celik ◽  
Ali Israr ◽  
Marcia K O'Malley

This paper introduces and validates quantitative performance measures for a rhythmic target-hitting task. These performance measures are derived from a detailed analysis of human performance during a month-long training experiment where participants learned to operate a 2-DOF haptic interface in a virtual environment to execute a manual control task. The motivation for the analysis presented in this paper is to determine measures of participant performance that capture the key skills of the task. This analysis of performance indicates that two quantitative measures—trajectory error and input frequency—capture the key skills of the target-hitting task, as the results show a strong correlation between the performance measures and the task objective of maximizing target hits. The performance trends were further explored by grouping the participants based on expertise and examining trends during training in terms of these measures. In future work, these measures will be used as inputs to a haptic guidance scheme that adjusts its control gains based on a real-time assessment of human performance of the task. Such guidance schemes will be incorporated into virtual training environments for humans to develop manual skills for domains such as surgery, physical therapy, and sports.
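Both measures are computed from data recorded during task execution. The sketch below shows one plausible computation, assuming RMS deviation from a reference path for trajectory error and the FFT peak of the operator's input for input frequency; the authors' exact definitions may differ, and the sample signal is synthetic.

```python
import numpy as np

def trajectory_error(actual, reference):
    """RMS deviation of the executed trajectory from a reference path.

    A stand-in for the paper's trajectory-error measure; the authors'
    exact definition may differ.
    """
    actual = np.asarray(actual, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((actual - reference) ** 2)))

def input_frequency(input_signal, sample_rate_hz):
    """Dominant frequency (Hz) of the operator's input, via an FFT peak.

    A stand-in for the paper's input-frequency measure.
    """
    signal = np.asarray(input_signal, dtype=float)
    signal = signal - signal.mean()              # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return float(freqs[np.argmax(spectrum)])

# Synthetic example: a noisy 2 Hz oscillation sampled at 100 Hz.
t = np.linspace(0, 5, 500, endpoint=False)
ref = np.sin(2 * np.pi * 2 * t)
act = ref + 0.1 * np.random.randn(t.size)
print(trajectory_error(act, ref))     # small RMS error
print(input_frequency(act, 100.0))    # approximately 2.0 Hz
```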


Author(s):  
Craig A. Stewart ◽  
Richard Knepper ◽  
Matthew R. Link ◽  
Marlon Pierce ◽  
Eric Wernert ◽  
...  

Computers accelerate our ability to achieve scientific breakthroughs. As technology evolves and new research needs come to light, the role for cyberinfrastructure as “knowledge” infrastructure continues to expand. In essence, cyberinfrastructure can be thought of as the integration of supercomputers, data resources, visualization, and people that extends the impact and utility of information technology. This article discusses cyberinfrastructure, the related topics of science gateways and campus bridging, and identifies future challenges and opportunities in cyberinfrastructure.


2021 ◽  
Author(s):  
Maria Vatshaug Ottermo ◽  
Knut Steinar Bjørkevoll ◽  
Tor Onshus

Abstract Automation of managed pressure drilling (MPD) has great potential for improving the consistency, efficiency, and safety of operations, and is therefore pursued by many actors. While this development mitigates many risk elements, it also adds new ones related to, for example, mathematical algorithms and remote access. This work is based on document reviews, interviews, and working sessions with the industry, and adds insight into how the risks associated with moving to higher levels of MPD automation can be mitigated to a level where the benefits are significantly larger than the sum of the added risks. The work has resulted in many recommendations for the industry, most of which relate to testing, verification, and validation of models and data inputs, as well as meaningful human control and a holistic approach when introducing new models. Recommendations were also given to the Petroleum Safety Authority Norway; these concerned missing or inadequate use of standards by the industry, lack of ICT knowledge, and encouraging increased experience sharing. Future work should address how to enable meaningful human control as models become more complex or rely to a larger extent on empirical data and artificial intelligence, as opposed to models based on first principles. Human control is important in unexpected situations in which the system fails to act safely. There is also a need to address ICT security issues arising as remote operation becomes more common.


Author(s):  
Michelle Kowalsky ◽  
Bruce Whitham

This chapter reviews the current literature on the types of social media practices in college and university libraries, and suggests some new strategic agendas for utilizing these tools for teaching and learning about the research process, as well as other means to connect libraries to their users. Library educators continually hope to “meet students where they are” and use social media to “push” library content toward interested or potential university patrons. One new way to improve engagement and “pull” patrons toward an understanding of the usefulness of licensed resources and expert research help is through the channels of social media. By enhancing awareness of library resources at the point of need, and through existing social relationships between library users and their friends, libraries can encourage peer interaction around new research methods and tools as they emerge, while increasing the use of library materials (both online and within the library facility) in new and different ways.


2020 ◽  
Vol 12 (1) ◽  
Author(s):  
Syed Shah Sultan Mohiuddin Qadri ◽  
Mahmut Ali Gökçe ◽  
Erdinç Öner

Abstract

Introduction: Due to the relentless daily increase in the number of vehicles, abating road congestion has become a key challenge in recent years. To cope with prevailing traffic scenarios and to meet the ever-increasing demand for traffic, the urban transportation system needs effective solution methodologies. Changes to urban infrastructure take years and sometimes may not even be feasible. For this reason, traffic signal timing (TST) optimization is one of the fastest and most economical ways to curtail congestion at intersections and improve traffic flow in the urban network.

Purpose: Researchers have been working with a variety of approaches, along with the exploitation of technology, to improve TST. This article analyzes the recent literature published between January 2015 and January 2020 on computational intelligence (CI) based simulation approaches and CI-based approaches for optimizing TST and Traffic Signal Control (TSC) systems, and provides insights, research gaps, and possible directions for future work for researchers interested in the field.

Methods: Simulation tools have a prominent place in analyzing the complex dynamic behavior of traffic streams. Nowadays, microsimulation tools are frequently used in TST-related research. For this reason, a critical review of some of the widely used microsimulation packages is provided in this paper.

Conclusion: Our review also shows that approximately 77% of the papers included utilize a microsimulation tool in some form. Therefore, it seems useful to include a review, categorization, and comparison of the most commonly used microsimulation tools for future work. We conclude by providing insights into the future of research in these areas.
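To make the kind of CI approach surveyed here concrete, the sketch below tunes the green-time split of a two-phase intersection with a simple genetic algorithm, one common CI technique in this literature. It is a minimal illustration only: the delay function is a simplified Webster-style formula standing in for a microsimulation run (studies in this field would evaluate candidate timings in a microsimulation tool instead), and all demand and timing parameters are invented.

```python
import random

# Toy stand-in for a microsimulator: estimated average delay (s/veh) at a
# two-phase intersection given green times (s) and fixed approach demands.
# Real CI studies would replace this with a microsimulation run.
DEMANDS = (900.0, 600.0)   # veh/h on the two conflicting approaches
SATURATION = 1800.0        # saturation flow, veh/h of green
LOST_TIME = 8.0            # total lost time per cycle, s

def estimated_delay(greens):
    cycle = sum(greens) + LOST_TIME
    total = 0.0
    for g, q in zip(greens, DEMANDS):
        capacity = SATURATION * g / cycle
        x = min(q / capacity, 0.99)  # degree of saturation, capped
        # Simplified Webster-style uniform-delay term.
        total += 0.5 * cycle * (1 - g / cycle) ** 2 / (1 - x * g / cycle)
    return total / len(greens)

def genetic_search(pop_size=30, generations=60):
    # Each individual is a pair of green times in [10, 60] seconds.
    pop = [[random.uniform(10, 60) for _ in DEMANDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=estimated_delay)       # lower delay is fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < 0.3:                            # mutation
                i = random.randrange(len(child))
                child[i] = min(60, max(10, child[i] + random.gauss(0, 3)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=estimated_delay)

best = genetic_search()
print("green times (s):", [round(g, 1) for g in best])
print("estimated delay (s/veh):", round(estimated_delay(best), 1))
```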

