Citizen Science as a New Way To Do Science

2017
Author(s):  
Marisa Ponti

This is the abstract of a talk given at the Dagstuhl Seminar 17272 - Citizen Science: Design and Engagement. Citizen science has received increasing attention because of its potential as a cost-effective method of gathering massive data sets and as a way of bridging the intellectual divide between laypeople and scientists. Citizen science is not a new phenomenon, but it is implemented in new ways in the digital age, offering opportunities to shape new interactions between volunteers, scientists, and other stakeholders, including policymakers. Arguably, citizen science rests on two main pillars: openness and participation. However, openness can remain unexploited if we do not create the technical and social conditions for broader participation in more collaborative citizen science projects, beyond collecting and sharing data with scientists. "Public participation" has too often assumed an ease with which hierarchies in science can be horizontalized and economic and geographic barriers removed. However, public participation is a contested term that should be problematized. The Scandinavian tradition of participatory design can help to conceptually explore the challenges related to participation and to designing for participation.

Author(s):  
A. Salman Avestimehr
Seyed Mohammadreza Mousavi Kalan
Mahdi Soltanolkotabi

Abstract Dealing with the sheer size and complexity of today's massive data sets requires computational platforms that can analyze data in a parallelized and distributed fashion. A major bottleneck in such modern distributed computing environments is that some of the worker nodes may run slowly. These nodes, known as stragglers, can significantly slow down computation, as the slowest node may dictate the overall computational time. A recent computational framework, called encoded optimization, creates redundancy in the data to mitigate the effect of stragglers. In this paper, we develop a novel mathematical understanding of this framework, demonstrating its effectiveness in much broader settings than was previously understood. We also analyze the convergence behavior of iterative encoded optimization algorithms, allowing us to characterize fundamental trade-offs between convergence rate, size of data set, accuracy, computational load (or data redundancy), and straggler toleration in this framework.
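As a rough illustration of how encoding buys straggler resilience, the sketch below simulates the general framework under assumptions of its own (a least-squares objective, a random Gaussian encoding matrix S, workers that straggle independently each iteration); it is not the paper's specific construction or analysis. Gradient descent runs on the encoded problem min_x ||S(Ax - b)||^2, and partial gradients from straggling workers are simply dropped.

```python
# Minimal simulation of encoded optimization for least squares.
# All sizes, the encoding, and the straggler model are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10                       # data points, parameters
m = 300                              # encoded rows (redundancy m/n = 1.5)
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Random encoding injects redundancy: solve min_x ||S(Ax - b)||^2 instead.
S = rng.normal(size=(m, n)) / np.sqrt(m)
A_enc, b_enc = S @ A, S @ b

# Encoded rows are partitioned across 10 workers; each iteration a random
# ~20% of workers straggle and their partial gradients are simply dropped.
workers = np.array_split(np.arange(m), 10)
x, lr = np.zeros(d), 0.05
for _ in range(500):
    grad = np.zeros(d)
    for w in workers:
        if rng.random() < 0.2:       # this worker straggled; ignore it
            continue
        grad += A_enc[w].T @ (A_enc[w] @ x - b_enc[w])
    x -= lr * grad / m

x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # uncoded reference solution
print("distance to uncoded solution:", np.linalg.norm(x - x_star))
```

Because a random encoding spreads information about every data point across many encoded rows, any sufficiently large surviving subset still sketches the full problem well; dropping stragglers then mainly slows convergence rather than corrupting the solution, which is the rate/redundancy/accuracy trade-off the abstract refers to.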


2022
pp. 41-67
Author(s):  
Vo Ngoc Phu
Vo Thi Ngoc Tran

Machine learning (ML), neural networks (NNs), evolutionary algorithms (EAs), and fuzzy systems (FSs) have been prominent branches of computer science for many years. They have been applied to many different areas and have contributed much to the development of large-scale corporations, massive organizations, and the like. These organizations generate vast amounts of information and massive data sets (MDSs), and such big data sets (BDSs) pose challenges for many commercial applications and research efforts. Many algorithms from ML, NNs, EAs, FSs, and computer science more broadly have therefore been developed to handle massive data sets successfully. To support this process, the authors survey in this chapter the NN algorithms applicable to large-scale data sets (LSDSs). Finally, they present a novel NN model for BDSs in a sequential environment (SE) and a distributed network environment (DNE).
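The chapter's own model is not reproduced here, but the following generic sketch (toy network and synthetic data are illustrative; nothing below comes from the chapter) shows the basic data-parallel pattern behind training a neural network over a massive data set in a distributed network environment: each node computes a gradient on its own shard of the data, and a master averages the results before updating the shared weights. Running the same loop over one shard at a time corresponds to the sequential environment.

```python
# Generic synchronous gradient-averaging sketch for a tiny NN (illustrative,
# NOT the chapter's specific model). Workers are simulated as data shards.
import numpy as np

rng = np.random.default_rng(1)
N, D, H = 4096, 20, 32                      # samples, input dim, hidden units
X = rng.normal(size=(N, D))
y = (X[:, 0] - X[:, 1] > 0).astype(float)   # synthetic binary labels

# One-hidden-layer network: tanh hidden units, sigmoid output
W1, b1 = rng.normal(scale=0.1, size=(D, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.1, size=H), 0.0

def grads(Xb, yb):
    """Backprop for mean binary cross-entropy on one shard."""
    h = np.tanh(Xb @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    dz = (p - yb) / len(yb)                 # dL/dlogit for sigmoid + BCE
    dW2, db2 = h.T @ dz, dz.sum()
    dh = np.outer(dz, W2) * (1 - h**2)      # through tanh
    return Xb.T @ dh, dh.sum(0), dW2, db2

shards = np.array_split(np.arange(N), 8)    # 8 simulated worker nodes
lr = 0.5
for epoch in range(300):
    # each "worker" computes a gradient on its shard; the master averages
    g = [grads(X[s], y[s]) for s in shards]
    dW1, db1, dW2, db2 = [sum(parts) / len(shards) for parts in zip(*g)]
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
print("training accuracy:", ((p > 0.5) == y).mean())
```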


Author(s):  
Joseph L. Breault

The National Academy of Sciences convened a conference on massive data sets in 1995. The presentation on health care noted that "massive applies in several dimensions . . . the data themselves are massive, both in terms of the number of observations and also in terms of the variables . . . there are tens of thousands of indicator variables coded for each patient" (Goodall, 1995, paragraph 18). Multiplying this by the number of patients in the United States, which runs to hundreds of millions, gives a sense of the overall scale.


2020
Vol 10 (6)
pp. 1343-1358
Author(s):  
Ernesto Iadanza
Rachele Fabbri
Džana Bašić-Ćićak
Amedeo Amedei
Jasminka Hasic Telalovic

Abstract This article aims to provide a thorough overview of the use of Artificial Intelligence (AI) techniques in studying the gut microbiota and its role in the diagnosis and treatment of some important diseases. The association between the microbiota and disease, together with its clinical relevance, is still difficult to interpret. Advances in AI techniques, such as Machine Learning (ML) and Deep Learning (DL), can help clinicians process and interpret these massive data sets. Two research groups, working in two different parts of Europe (Florence and Sarajevo), were involved in this scoping review. The papers included describe the use of ML or DL methods applied to the study of the human gut microbiota. In total, 1109 papers were initially considered; after screening, a final set of 16 articles was included in the review. The reviewed papers applied a range of AI techniques: eleven evaluated only ML algorithms (from one to eight algorithms applied to a single dataset), while the remaining five examined both ML and DL algorithms. The most frequently applied ML algorithm was Random Forest, which also exhibited the best performance.
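As a concrete example of the kind of pipeline the reviewed studies build, the sketch below trains a Random Forest on a synthetic relative-abundance matrix with scikit-learn. The data, sizes, and labels are fabricated placeholders, not drawn from any of the 16 reviewed papers.

```python
# Random Forest classification of samples from microbial abundance profiles,
# the pattern most of the reviewed studies follow. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_taxa = 120, 300   # e.g. 120 stool samples, 300 bacterial taxa

# Synthetic relative-abundance matrix (rows sum to 1, as in 16S rRNA data)
X = rng.dirichlet(alpha=np.ones(n_taxa), size=n_samples)
y = rng.integers(0, 2, size=n_samples)  # placeholder disease/healthy labels

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
# With these random labels AUC will hover near 0.5; real phenotype labels
# are where discriminative signal would show up.
print(f"5-fold AUC: {scores.mean():.2f} +/- {scores.std():.2f}")

# Feature importances rank taxa by how much they drive the prediction
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("10 most informative taxa (column indices):", top)
```

Random Forests suit microbiome profiles because they handle high-dimensional, sparse, compositional features without heavy preprocessing, and their feature importances point back to candidate taxa, which may explain why they dominated, and performed best, in the reviewed studies.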


MMseqs2 enables sensitive protein sequence searching for the analysis of massive data sets

2017
Vol 35 (11)
pp. 1026-1028
Author(s):  
Martin Steinegger
Johannes Söding
