Social data collection and analyses

2021 ◽  
pp. 53-76
Author(s):  
Marie J. E. Charpentier ◽  
Marie Pelé ◽  
Julien P. Renoult ◽  
Cédric Sueur

Sampling accurate and quantitative behavioural data requires describing fine-grained patterns of social relationships and/or spatial associations, which is highly challenging, especially in natural environments. Although behavioural ecologists have conducted systematic studies of animal societies since the nineteenth century, new biologging technologies have the potential to revolutionise the sampling of animals' social relationships. However, the tremendous quantity of data sampled and the diversity of biologgers currently available (such as proximity loggers), which allow a large array of biological and physiological data to be collected, bring new analytical challenges. The high spatiotemporal resolution of the data needed to study social processes, such as disease or information diffusion, requires new analytical tools, such as social network analyses, developed to handle large data sets. The quantity and quality of the data now available on a large array of social systems are yielding previously unattainable results, consistently opening new and exciting research avenues.
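As a rough illustration of the kind of social network analysis the abstract refers to, the sketch below builds a weighted association network from hypothetical proximity-logger co-detection records and computes simple centrality measures with networkx; the data layout and individual identifiers are assumptions for illustration, not part of the original study.

```python
# Minimal sketch (assumed data layout): build a social network from
# proximity-logger co-detections and summarise it with network metrics.
import networkx as nx

# Hypothetical co-detection records: (individual_a, individual_b, n_contacts)
co_detections = [
    ("A", "B", 12),
    ("A", "C", 3),
    ("B", "C", 7),
    ("C", "D", 1),
]

G = nx.Graph()
for a, b, n in co_detections:
    # Edge weight = number of times the two loggers detected each other
    G.add_edge(a, b, weight=n)

# Weighted degree (strength) and eigenvector centrality per individual
strength = dict(G.degree(weight="weight"))
centrality = nx.eigenvector_centrality(G, weight="weight")

for node in G.nodes:
    print(f"{node}: strength={strength[node]}, centrality={centrality[node]:.2f}")
```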

F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 52 ◽  
Author(s):  
Brian D. O'Connor ◽  
Denis Yuen ◽  
Vincent Chung ◽  
Andrew G. Duncan ◽  
Xiang Kun Liu ◽  
...  

As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore (https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH).
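As a hedged illustration of the interoperability the authors describe, the sketch below queries a GA4GH Tool Registry Service (TRS) endpoint for registered tools with Python's requests library. The base URL and response fields shown are assumptions drawn from the public TRS specification and may differ from Dockstore's current API; check the official documentation before relying on them.

```python
# Minimal sketch: list tools from a GA4GH TRS-compatible registry.
# The endpoint path and response fields are assumptions based on the
# public TRS specification; verify against the current Dockstore docs.
import requests

TRS_BASE = "https://dockstore.org/api/ga4gh/trs/v2"  # assumed base URL

resp = requests.get(f"{TRS_BASE}/tools", params={"limit": 5}, timeout=30)
resp.raise_for_status()

for tool in resp.json():
    # 'id' and 'toolclass' are fields defined by the TRS schema
    print(tool.get("id"), "-", tool.get("toolclass", {}).get("name"))
```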


2021 ◽  
Vol 16 (1) ◽  
pp. 117-144
Author(s):  
Michał Bednarczyk

This paper describes JupyQgis, a new Python library for the Jupyteo IDE that enables interoperability with the QGIS system. Jupyteo is an online integrated development environment for Earth observation data processing and is available on a cloud platform. It is targeted at remote sensing experts, scientists, and users who develop Jupyter notebooks by reusing embedded open-source tools, WPS interfaces, and existing notebooks. In recent years, data science methods have become increasingly popular and the focus of many organizations. Many scientific disciplines are undergoing a significant transformation due to data-driven solutions. This is especially true of geodesy, environmental sciences, and Earth sciences, where large data sets such as Earth observation satellite data (EO data) and GIS data are used. Previous experience with Jupyteo, among both the users of the platform and its creators, indicates the need to supplement its functionality with GIS analytical tools. This study analyzed the most efficient way to combine the functionality of the QGIS system with that of the Jupyteo platform in a single tool. It was found that the most suitable solution is a custom library providing an API for collaboration between the two environments. The resulting library makes the work much easier and simplifies the source code of the Python scripts created. The functionality of the developed solution is illustrated with a test use case.
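The JupyQgis API itself is not shown in the abstract; as a sketch of the kind of QGIS-from-Python workflow such a library wraps, the fragment below uses standard PyQGIS classes to initialize a headless QGIS session and load a vector layer. The prefix path and file path are placeholders.

```python
# Sketch of a headless PyQGIS session of the kind a Jupyter/QGIS bridge
# would wrap; the prefix path and layer path are placeholders.
from qgis.core import QgsApplication, QgsVectorLayer

QgsApplication.setPrefixPath("/usr", True)   # adjust to the local QGIS install
qgs = QgsApplication([], False)              # False = no GUI
qgs.initQgis()

layer = QgsVectorLayer("parcels.shp", "parcels", "ogr")
if layer.isValid():
    print("Features:", layer.featureCount())
else:
    print("Failed to load layer")

qgs.exitQgis()
```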


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
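As a worked sketch of the two processing routes described above (not the authors' actual software), the numpy fragment below forms both the difference spectrum and the shift-and-add "normal" spectrum from a pair of spectra offset by 1 eV, i.e. 20 channels at 20 channels/eV; which spectrum leads in energy is an assumption.

```python
# Sketch of the two processing routes for a pair of 1024-channel EELS
# spectra offset by 1 eV (= 20 channels at 20 channels/eV).
import numpy as np

CHANNELS_PER_EV = 20
offset = 1 * CHANNELS_PER_EV  # 20-channel shift between the two acquisitions

rng = np.random.default_rng(0)
spec_a = rng.poisson(1000, 1024).astype(float)   # stand-in data
spec_b = rng.poisson(1000, 1024).astype(float)

# Route 1: subtract the offset spectra -> artifact-corrected difference spectrum
difference = spec_a - spec_b

# Route 2: numerically remove the offset, then add -> "normal" spectrum
# (only the overlapping energy range is kept)
normal = spec_a[offset:] + spec_b[:-offset]
```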


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea-salt ageing, and halogen chemistry. Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
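As a generic illustration of the multivariate workflow named above (not the authors' pipeline), the scikit-learn sketch below standardizes a matrix of per-particle elemental intensities, reduces it with PCA, and clusters it with k-means; the data, number of elements, and cluster count are placeholders.

```python
# Generic PCA + k-means sketch for per-particle EDS composition data;
# the data matrix, element count and number of clusters are placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.random((500, 8))  # 500 particles x 8 elemental intensities (stand-in)

X_std = StandardScaler().fit_transform(X)          # zero mean, unit variance
scores = PCA(n_components=3).fit_transform(X_std)  # first 3 principal components
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

print(np.bincount(labels))  # particles per cluster
```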


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the processing of large arrays of information in distributed systems. Singular value decomposition is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency on data size were obtained for distributed systems using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information because of their low computing performance. It is therefore proposed to use distributed systems that apply singular value decomposition to reduce the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, whereas MPI performs calculations more efficiently for small amounts of information. As the data sets grow, it is advisable to use the MapReduce model.
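A minimal sketch of the data-reduction step described above: a truncated singular value decomposition with numpy keeps only the leading k singular triplets, cutting the volume of data each node has to hold or exchange. The matrix size and retained rank are arbitrary placeholders.

```python
# Truncated SVD sketch: keep only the k leading singular triplets of a
# data matrix to reduce the volume of data handled by each node.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((1000, 200))        # stand-in data matrix
k = 20                             # retained rank (placeholder)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation

rel_error = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"Relative approximation error at rank {k}: {rel_error:.3f}")
```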


Author(s):  
Parag A Pathade ◽  
Vinod A Bairagi ◽  
Yogesh S. Ahire ◽  
Neela M Bhatia

Proteomics is an emerging technology for the high-throughput identification and understanding of proteins. It is the protein equivalent of genomics and has captured the imagination of biomolecular scientists worldwide. Because the proteome reveals the dynamic state of a cell, tissue, or organism more accurately, much is expected from proteomics in the search for better disease markers for diagnosis and therapy monitoring. Proteomics is expected to play a major role in biomedical research and will have a significant impact on the development of diagnostics and therapeutics for cancer, heart disease, and infectious diseases. Proteomics research leads to the identification of new protein markers for diagnostic purposes and of novel molecular targets for drug discovery. Although the potential is great, many challenges and issues remain to be solved, including gene expression, peptide analysis, detection of low-abundance proteins, analytical tools, drug target discovery, and cost. The systematic and efficient analysis of vast genomic and proteomic data sets remains a major challenge for researchers today. Nevertheless, proteomics provides the groundwork for constructing and extracting useful knowledge for biomedical research. This review article covers some of the opportunities and challenges offered by proteomics.


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  

Computers ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 47
Author(s):  
Fariha Iffath ◽  
A. S. M. Kayes ◽  
Md. Tahsin Rahman ◽  
Jannatul Ferdows ◽  
Mohammad Shamsul Arefin ◽  
...  

A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants, who are required to write computer programs capable of solving them. An online judge system is used to automate the judging of the programs submitted by the users; online judges are systems designed for the reliable evaluation of submitted source code. Traditional online judging platforms are not ideally suited to programming labs, as they do not support partial scoring or efficient detection of plagiarized code. Considering this, in this paper we present an online judging framework that is capable of automatically scoring code by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by extracting fingerprints of programs and comparing the fingerprints instead of the whole files. We used winnowing to select fingerprints among the k-gram hash values of a source code, which were generated with the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets and compared its run time with that of MOSS, a widely used plagiarism detection tool.
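As a hedged sketch of the fingerprinting scheme described above, the fragment below computes k-gram hashes with a Rabin–Karp-style rolling hash and selects fingerprints by winnowing (keeping the minimum hash in each sliding window); the parameter values and the simplified window handling are illustrative choices, not those used by the authors.

```python
# Sketch of k-gram hashing plus winnowing fingerprint selection;
# k, window size and the hash base/modulus are illustrative choices.
K = 5          # k-gram length
W = 4          # winnowing window size
BASE, MOD = 256, 1_000_000_007

def kgram_hashes(text: str):
    """Rolling (Rabin-Karp style) hashes of all k-grams of the text."""
    if len(text) < K:
        return []
    h = 0
    high = pow(BASE, K - 1, MOD)
    for ch in text[:K]:
        h = (h * BASE + ord(ch)) % MOD
    hashes = [h]
    for i in range(K, len(text)):
        h = ((h - ord(text[i - K]) * high) * BASE + ord(text[i])) % MOD
        hashes.append(h)
    return hashes

def winnow(hashes):
    """Keep the minimum hash of every window of W consecutive hashes."""
    fingerprints = set()
    for i in range(len(hashes) - W + 1):
        fingerprints.add(min(hashes[i:i + W]))
    return fingerprints

src_a = "for i in range(10): total += i"
src_b = "for j in range(10): total += j"
fa, fb = winnow(kgram_hashes(src_a)), winnow(kgram_hashes(src_b))
similarity = len(fa & fb) / len(fa | fb)   # Jaccard similarity of fingerprints
print(f"Fingerprint similarity: {similarity:.2f}")
```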


2021 ◽  
Author(s):  
Věra Kůrková ◽  
Marcello Sanguineti
