interactive computing
Recently Published Documents


TOTAL DOCUMENTS

169
(FIVE YEARS 34)

H-INDEX

12
(FIVE YEARS 2)

Author(s):  
Jason Williams

Posing complex research questions poses complex reproducibility challenges. Datasets may need to be managed over long periods of time, and reliable, secure repositories are needed for data storage. Sharing big data requires advance planning and becomes complicated when collaborators are spread across institutions and countries. Many complex analyses require the larger compute resources only provided by cloud and high-performance computing infrastructure. Finally, at publication, funder and publisher requirements for data availability, accessibility, and computational reproducibility must be met. For all of these reasons, cloud-based cyberinfrastructures are an important component for satisfying the needs of data-intensive research. Learning how to incorporate these technologies into your research skill set will allow you to tackle data analysis challenges that are often beyond the resources of individual research institutions. One of the advantages of CyVerse is that it offers many solutions for high-powered analyses that do not require knowledge of command-line (i.e., Linux) computing. In this chapter we highlight CyVerse capabilities by analyzing RNA-Seq data. The lessons learned will translate to doing RNA-Seq in other computing environments and will focus on how CyVerse infrastructure supports reproducibility goals (e.g., metadata management, containers), team science (e.g., data sharing features), and flexible computing environments (e.g., interactive computing, scaling).
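A generic, hedged sketch of the metadata-management idea mentioned above: recording the provenance of an RNA-Seq dataset in a JSON sidecar file so that analyses remain reproducible when data are shared. The file names, fields, and container image below are illustrative placeholders, not CyVerse-specific identifiers.

import json
from datetime import date

# Illustrative provenance record for a hypothetical RNA-Seq input file.
metadata = {
    "dataset": "rnaseq_reads_sample01.fastq.gz",             # placeholder file name
    "organism": "Arabidopsis thaliana",
    "sequencing_platform": "Illumina",
    "processing_container": "example.org/rnaseq-tools:1.0",  # placeholder container image
    "date_recorded": date.today().isoformat(),
}

# Store the record next to the data so collaborators receive both together.
with open("rnaseq_reads_sample01.metadata.json", "w") as fh:
    json.dump(metadata, fh, indent=2)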


Author(s):  
David L. Alderson

This article describes the motivation and design for introductory coursework in computation aimed at midcareer professionals who desire to work in data science and analytics but who have little or no background in programming. In particular, we describe how we use modern interactive computing platforms to accelerate the learning of our students both in and out of the classroom. We emphasize the importance of organizing the interaction with course material so that students learn not only to “think computationally” but also to “do computationally.” We provide details of existing courses in computation offered at the Naval Postgraduate School, and we describe their ongoing evolution in response to increased demand from members of the civilian and military workforce.


2021 ◽  
Author(s):  
Ariel Rokem ◽  
Ben Dichter ◽  
Christopher Holdgraf ◽  
Satrajit S Ghosh

New technical and scientific breakthroughs are enabling neuroscientific measurements that are both wider in scope and denser in their sampling, providing views of the brain that have not been possible before. At the same time, funding initiatives, as well as scientific institutions and communities, are promoting the sharing of neuroscientific data. These factors are creating a deluge of neuroscience data that promises to provide new and meaningful insights into brain function. However, the size, complexity, and identifiability of the data also present challenges that arise from the difficulties in storing, accessing, processing, analyzing, visualizing and understanding data at large scale. Based on their successful adoption in the earth sciences, we have started adopting and adapting a set of tools for interactive, scalable computing in neuroscience. We are building an approach based on a combination of a vibrant ecosystem of open-source software libraries and standards, coupled with the massive computational power of the public cloud, and served through interactive browser-based Jupyter interfaces. Together, these could provide uniform, universal access to datasets for flexible and scalable exploration and analysis. We present a few prototype use-cases of this approach, and we identify barriers and technical challenges that still need to be addressed to facilitate wider deployment and full exploitation of its advantages.
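As a rough illustration of the interactive, scalable-computing pattern described here, the sketch below assumes Dask as the parallel computing library (a common choice in the earth-science stacks referenced) and uses a random array in place of a large neuroscientific recording.

import dask.array as da

# Lazily define a large "recording" in chunks that fit in memory one at a time;
# in practice this would be loaded from cloud object storage.
recording = da.random.random((100_000, 1_000), chunks=(10_000, 1_000))

# Build the computation graph (nothing runs yet), then execute it in parallel.
channel_means = recording.mean(axis=0)
print(channel_means.compute()[:5])

Run inside a browser-based Jupyter session backed by cloud workers, the same few lines scale from a laptop-sized subset to the full dataset.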


2021 ◽  
Vol 54 (6) ◽  
pp. 1-25
Author(s):  
Thomas Plötz

With the widespread proliferation of (miniaturized) sensing facilities and the massive growth in popularity of machine learning (ML) research, new frontiers in automated sensor data analysis have been explored that lead to paradigm shifts in many application domains. In fact, many practitioners now rely more and more on ML methods as an integral part of their sensor data analysis workflows, without necessarily being ML experts or having an interest in becoming one. The availability of toolkits that can readily be used by practitioners has led to immense popularity, widespread adoption, and, in essence, pragmatic use of ML methods. ML having become mainstream helps push the core agenda of practitioners, yet it carries the danger of misused methods and thus the risk of misleading, if not flawed, results. Based on years of observations in the ubiquitous and interactive computing domain, which relies extensively on sensors and automated sensor data analysis, and on having taught and worked with numerous students in the field, in this article I advocate a considerate use of ML methods by practitioners, i.e., non-ML experts, and elaborate on the pitfalls of an overly pragmatic use of ML techniques. The article not only identifies and illustrates the most common issues, it also offers practical guidelines to avoid them, which should help practitioners benefit from employing ML in their core research domains and applications.
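One concrete instance of the kind of pitfall discussed is evaluating a sensor-data model with a random cross-validation split instead of a subject-wise split, which leaks each person's data into both training and test sets. The sketch below, with placeholder data and variable names not taken from the article, contrasts the two using scikit-learn.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_subjects, windows_per_subject, n_features = 10, 60, 20

# Toy "sensor features": each subject has a characteristic offset the model can memorize.
subject_ids = np.repeat(np.arange(n_subjects), windows_per_subject)
subject_offsets = rng.normal(size=(n_subjects, n_features))
X = subject_offsets[subject_ids] + rng.normal(scale=0.5, size=(len(subject_ids), n_features))
y = subject_ids % 2  # labels that happen to correlate with subject identity

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# Random split: the same subject appears in train and test, so scores look inflated.
naive = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Subject-wise split: each subject is held out entirely, giving an honest estimate.
grouped = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=subject_ids)

print("random split accuracy:", naive.mean())
print("subject-wise accuracy:", grouped.mean())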


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-4
Author(s):  
Mathias Funk ◽  
Rong-Hao Liang ◽  
Philippe Palanque ◽  
Jun Hu ◽  
Panos Markopoulos

This issue of the Proceedings of the ACM on Human-Computer Interaction features contributions in the intersection of human-computer interaction and software engineering, with further disciplines blending into a rich set of scientific works. 2021 is the first time the annual conference on Engineering Interactive Computing Systems (EICS) is hosted in the Netherlands and in the context of an Industrial Design department. We take this opportunity to focus on the relations and influence of the design discipline on the work of the EICS community. This resulted in a new set of topics for EICS, which were already partly reflected in the many submissions we received in three extensive review rounds throughout 2020 and the beginning of 2021. In this editorial we offer a perspective on what EICS is not yet, looking at the inclusion of and interplay with design as a related discipline.


Author(s):  
Frank Appiah

Interactive computing environments consisting of a screen and keyboard provide a means to relax and enjoy program output. Ways to slow and relax program execution are explored through system calls such as delayed execution, synthesis execution, and file management execution. The leisure time can be the exact delay used to slow the pace of output activity.
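A minimal sketch of the delayed-execution idea, assuming Python's sleep call as the delay mechanism; the delay value and output lines are illustrative only.

import time

# Pace the program output so it can be read at leisure.
results = ["first result", "second result", "third result"]
for line in results:
    print(line)
    time.sleep(2)  # leisure time: a two-second delay between outputs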


Author(s):  
Pedro Rodrigues ◽  
Jose Luis Silva

Usability is very important; however, it is still difficult to develop interactive computing systems that meet all users' specificities. Help systems should be a way of bridging this gap. This paper presents a general survey of recent works (building upon previous surveys) related to improving applications' help through demonstration and automation, and identifies which technologies are acting as enablers. The main contributions are identifying (i) the recent existing solutions; (ii) the aspects that must be investigated further; and (iii) the main difficulties that are preventing faster progress.


Risks ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 48
Author(s):  
Ivanka Vasenska ◽  
Preslav Dimitrov ◽  
Blagovesta Koyundzhiyska-Davidkova ◽  
Vladislav Krastev ◽  
Pavol Durana ◽  
...  

In the context of the current crises following COVID-19 and growing global economic uncertainties, the issues regarding financial transactions with FINTECH are increasingly apparent. Consequently, in our opinion, the utilization of FINTECH financial transactions leads to a risk-reduction approach when in contact with other people. Moreover, financial transactions with FINTECH can save customers' funds. Therefore, during crises, FINTECH applications can be perceived as more competitive than the traditional banking system. All of the above provoked us to conduct research on the utilization of financial transactions with FINTECH before and after the COVID-19 crisis outbreak. The aim of the article is to present a survey analysis of FINTECH utilization by individual customers before and after the crisis in Bulgaria. The methodology includes a questionnaire survey of 242 individual respondents. For the data processing, we implemented statistical measures and quantitative methods, including two-sample paired t-tests, Levene's test, and ANOVAs, performed in Python within Jupyter Notebook, a web-based interactive computing environment for creating documents. The findings bring out the main issues related to the implementation of financial transactions with FINTECH under crisis conditions and identify problems related to FINTECH transactions during the COVID-19 crisis in Bulgaria.
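A hedged sketch of the kinds of tests reported (two-sample paired t-test, Levene's test, ANOVA) as they might be run with SciPy in a Jupyter Notebook; the arrays stand in for the before/after responses and are not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder usage scores for 242 respondents before and after the outbreak.
before = rng.normal(loc=3.0, scale=1.0, size=242)
after = before + rng.normal(loc=0.4, scale=0.8, size=242)

t_stat, p_paired = stats.ttest_rel(before, after)   # two-sample paired t-test
w_stat, p_levene = stats.levene(before, after)      # Levene's test for equal variances
f_stat, p_anova = stats.f_oneway(before, after)     # one-way ANOVA

print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"Levene's test: W = {w_stat:.2f}, p = {p_levene:.4f}")
print(f"ANOVA:         F = {f_stat:.2f}, p = {p_anova:.4f}")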


2021 ◽  
Author(s):  
Andres Peñuela ◽  
Francesca Pianosi

Reproducibility and re-usability of research require giving access to data and numerical code but, equally importantly, helping others to understand how inputs, models and outputs are linked together. Jupyter Notebooks are a programming environment that dramatically facilitates this task by making it possible to create stronger and more transparent links between data, models and results. Within a single document where all data, code, comments and results are brought together, Jupyter Notebooks provide an interactive computing environment in which users can read, run or modify the code and visualise the resulting outputs. In this presentation, we explain the philosophy we have applied in developing interactive Jupyter Notebooks for two Python toolboxes, iRONS (a package of functions for reservoir modelling and optimisation) and SAFE (a package of functions for global sensitivity analysis). The Notebooks serve two purposes: some target current users by demonstrating the key functionalities of the toolbox (‘how’ to use it), effectively replacing the technical documentation of the software; others target potential users by demonstrating the general value of the methodologies implemented in the toolbox (‘why’ use it). In all cases, the Notebooks integrate the following features: 1) the code is written in a math-like style to make it readable to a wide variety of users; 2) they integrate interactive results visualisation to facilitate the conversation between the data, the model and the user, even when the user does not have the time or expertise to read the code; 3) they can be run on the cloud using online computational environments, such as Binder, so that they are accessible from a web browser without requiring the installation of Python. We will discuss the feedback received from users and our preliminary results on measuring the effectiveness of the Notebooks in transferring knowledge of the different modelling tasks.
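A minimal sketch of the interactive-visualisation pattern described in feature 2, assuming ipywidgets and matplotlib inside a notebook; the reservoir-release curve and parameter name are illustrative, not taken from iRONS or SAFE.

import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

def plot_release(demand_fraction=0.5):
    # Toy weekly reservoir release schedule scaled by a user-chosen fraction.
    weeks = np.arange(52)
    release = demand_fraction * (10 + 5 * np.sin(2 * np.pi * weeks / 52))
    plt.plot(weeks, release)
    plt.xlabel("week")
    plt.ylabel("release (Mm3/week)")
    plt.show()

# Dragging the slider re-runs the function and redraws the figure in the notebook.
interact(plot_release, demand_fraction=(0.0, 1.0, 0.05))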

