Stratigraphic uncertainty in sparse versus rich data sets in a fluvial-deltaic outcrop analog: Ferron Notom delta in the Henry Mountains region, southern Utah

AAPG Bulletin ◽  
2012 ◽  
Vol 96 (3) ◽  
pp. 415-438 ◽  
Author(s):  
Weiguo Li ◽  
Janok P. Bhattacharya ◽  
Yijie Zhu

2012 ◽  
Vol 7 (1) ◽  
pp. 174-197 ◽  
Author(s):  
Heather Small ◽  
Kristine Kasianovitz ◽  
Ronald Blanford ◽  
Ina Celaya

Social networking sites and other social media have enabled new forms of collaborative communication and participation for users, and created additional value as rich data sets for research. Research based on accessing, mining, and analyzing social media data has risen steadily over the last several years and is increasingly multidisciplinary; researchers from the social sciences, humanities, computer science, and other domains have used social media data as the basis of their studies. The broad use of this form of data has implications for how curators address preservation, access, and reuse for an audience with divergent disciplinary norms related to privacy, ownership, authenticity, and reliability. In this paper, we explore how the characteristics of the Twitter platform, coupled with an ambiguous and evolving understanding of privacy in networked communication and divergent disciplinary understandings of the resulting data, combine to create complex issues for curators trying to ensure broad-based and ethical reuse of Twitter data. We provide a case study of a specific data set to illustrate how data curators can engage with the topics and questions raised in the paper. While we offer some initial suggestions to librarians and other information professionals who are beginning to receive social media data from researchers, our larger goal is to stimulate discussion and prompt additional research on the curation and preservation of social media data.


2021 ◽  
Vol 44 (1) ◽  
Author(s):  
Claire M. Gillan ◽  
Robb B. Rutledge

Improvements in understanding the neurobiological basis of mental illness have unfortunately not translated into major advances in treatment. At this point, it is clear that psychiatric disorders are exceedingly complex and that, in order to account for and leverage this complexity, we need to collect longitudinal datasets from much larger and more diverse samples than is practical using traditional methods. We discuss how smartphone-based research methods have the potential to dramatically advance our understanding of the neuroscience of mental health. This, we expect, will take the form of complementing lab-based hard neuroscience research with dense sampling of cognitive tests, clinical questionnaires, passive data from smartphone sensors, and experience-sampling data as people go about their daily lives. Theory- and data-driven approaches can help make sense of these rich data sets, and the combination of computational tools and the big data that smartphones make possible has great potential value for researchers wishing to understand how aspects of brain function give rise to, or emerge from, states of mental health and illness. Expected final online publication date for the Annual Review of Neuroscience, Volume 44 is July 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.


2021 ◽  
Author(s):  
Luciano Serafini ◽  
Artur d’Avila Garcez ◽  
Samy Badreddine ◽  
Ivan Donadello ◽  
Michael Spranger ◽  
...  

The recent availability of large-scale data combining multiple data modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area, mostly by adopting a sub-symbolic distributed representation. It is generally accepted now that such purely sub-symbolic approaches can be data inefficient and struggle at extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations, ideally built from human-readable symbols. Despite being more explainable and more successful at reasoning, symbolic AI usually struggles when faced with incomplete knowledge, large or inaccurate data sets, and combinatorial knowledge. Neurosymbolic AI attempts to benefit from the strengths of both approaches, combining reasoning over complex representations of knowledge with efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge in efficient sub-symbolic representations, and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions of such learning systems. Logic Tensor Networks (LTN) are a neurosymbolic AI system for querying, learning, and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics, such that every symbolic expression has an interpretation grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization. This chapter presents the LTN framework and illustrates its use on knowledge-completion tasks that ground the relational predicates (symbols) into a concrete interpretation (vectors and tensors). It then investigates the use of LTN in semi-supervised learning, learning of embeddings, and reasoning.
LTN has recently been applied to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning; these applications use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning, and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing the next steps for neurosymbolic AI and LTN-based AI models.
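The Real Logic idea described above can be illustrated with a small self-contained sketch (heavily simplified and hypothetical; this is not the LTN library's API): predicates are grounded as differentiable functions into [0, 1], connectives become fuzzy-logic operators (here the product t-norm and Reichenbach implication), and a universal quantifier becomes an aggregation over the domain, so the truth degree of a formula is differentiable and amenable to gradient-based optimization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predicate(w, b, x):
    """Ground a unary predicate P(x) as sigmoid(w.x + b), a value in [0, 1]."""
    return sigmoid(np.dot(w, x) + b)

def t_and(a, b):
    """Product t-norm: fuzzy truth value of a conjunction."""
    return a * b

def t_implies(a, b):
    """Reichenbach implication: 1 - a + a*b."""
    return 1.0 - a + a * b

def forall_implies(wp, bp, wq, bq, xs):
    """Truth degree of 'forall x: P(x) -> Q(x)' over a finite domain xs,
    aggregated with the mean (one common choice of aggregator)."""
    vals = [t_implies(predicate(wp, bp, x), predicate(wq, bq, x)) for x in xs]
    return float(np.mean(vals))
```

Because every operation above is differentiable, maximizing the truth degree of a knowledge base with respect to the predicate parameters is an ordinary gradient-based learning problem, which is the core mechanism LTN builds on.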


Galaxies ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 9 ◽  
Author(s):  
Jean-Philippe Lenain

Blazars are jetted active galactic nuclei with a jet pointing close to the line of sight, which enhances their apparent luminosity and variability. Monitoring these sources is essential in order to catch them flaring and promptly organize follow-up multi-wavelength observations, which are key to providing the rich data sets used to derive, for example, the emission mechanisms at work and the size and location of the flaring zone. In this context, the Fermi-LAT has proven to be an invaluable instrument, whose data are used to trigger many follow-up observations at high and very high energies. A few examples are illustrated here, as well as a description of different data products and pipelines, with a focus on FLaapLUC, a tool in use within the H.E.S.S. collaboration.
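As a rough illustration of the kind of trigger logic such monitoring pipelines rely on (a simplified stand-in for the idea, not the actual FLaapLUC algorithm), a light-curve bin can be flagged as flaring when its flux exceeds the long-term mean by some number of standard deviations:

```python
import numpy as np

def flare_alerts(flux, n_sigma=3.0):
    """Return the indices of light-curve bins whose flux exceeds the
    long-term mean by more than n_sigma standard deviations.
    This is an illustrative threshold criterion only."""
    flux = np.asarray(flux, dtype=float)
    mean, std = flux.mean(), flux.std()
    return np.flatnonzero(flux > mean + n_sigma * std)
```

A real pipeline would additionally handle upper limits, variable exposure, and per-source adaptive thresholds, but the core decision is of this shape.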


Bioanalysis ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 87-98 ◽  
Author(s):  
Anhye Kim ◽  
Stephen R Dueker ◽  
Feng Dong ◽  
Ad F Roffel ◽  
Sang-won Lee ◽  
...  

Aim: Human ¹⁴C radiotracer studies provide information-rich data sets that enable informed decision making in clinical drug development. These studies are supported by liquid scintillation counting after conventional-sized ¹⁴C doses (50–200 μCi) or by complex accelerator mass spectrometry (AMS) after microtracer-sized doses (∼0.1–1 μCi). Mid-infrared laser-based ‘cavity ring-down spectroscopy’ (CRDS) is an emerging platform for the sensitive quantitation of ¹⁴C tracers. Results & methodology: We compared the total ¹⁴C concentrations in plasma and urine samples from a microtracer study using both CRDS and AMS technology. The data were evaluated using statistical and pharmacokinetic modeling. Conclusion: The CRDS method closely reproduced the AMS method for total ¹⁴C concentrations. With optimization of the automated sample interface and further testing, it promises to be an accessible, robust system for pivotal microtracer investigations.
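One standard way to compare paired concentration measurements from two analytical platforms, in the spirit of the cross-method evaluation above (this sketch is illustrative and is not the authors' statistical analysis), is a Bland-Altman summary of the paired differences:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman summary for paired measurements from two methods:
    returns the mean bias and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A small bias and narrow limits of agreement would support the conclusion that one platform closely reproduces the other.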


2019 ◽  
Vol 487 (3) ◽  
pp. 4037-4056 ◽  
Author(s):  
Luca Di Mascolo ◽  
Eugene Churazov ◽  
Tony Mroczkowski

ABSTRACT We report the joint analysis of single-dish and interferometric observations of the Sunyaev–Zeldovich (SZ) effect from the galaxy cluster RX J1347.5−1145. We have developed a parametric fitting procedure that uses native imaging and visibility data, and tested it using the rich data sets from ALMA, Bolocam, and Planck available for this object. RX J1347.5−1145 is a very hot and luminous cluster showing signatures of a merger. Previous X-ray-motivated SZ studies have highlighted the presence of an excess SZ signal south-east of the X-ray peak, which was generally interpreted as a strong shock-induced pressure perturbation. Our model, when centred at the X-ray peak, confirms this. However, the presence of two almost equally bright giant elliptical galaxies separated by ∼100 kpc makes the choice of the cluster centre ambiguous, and allows for considerable freedom in modelling the structure of the galaxy cluster. For instance, we have shown that the SZ signal can be well described by a single smooth ellipsoidal generalized Navarro–Frenk–White profile, where the best-fitting centroid is located between the two brightest cluster galaxies. This leads to a considerably weaker excess SZ signal from the south-eastern substructure. Further, the most prominent features seen in the X-ray can be explained as predominantly isobaric structures, alleviating the need for highly supersonic velocities, although overpressurized regions associated with the moving subhaloes are still present in our model.
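The generalized Navarro-Frenk-White (gNFW) pressure profile used in the fit above has a standard parametric form, P(r) = P0 / [(r/rs)^γ (1 + (r/rs)^α)^((β−γ)/α)]. A minimal sketch (the shape parameters below are illustrative defaults, not the paper's best-fitting values):

```python
import numpy as np

def gnfw_pressure(r, P0=1.0, rs=500.0, gamma=0.31, alpha=1.05, beta=5.49):
    """Generalized NFW pressure profile:
    P(r) = P0 / [ (r/rs)^gamma * (1 + (r/rs)^alpha)^((beta - gamma)/alpha) ].
    gamma and beta set the inner and outer logarithmic slopes; alpha
    controls the sharpness of the transition at the scale radius rs."""
    x = np.asarray(r, dtype=float) / rs
    return P0 / (x**gamma * (1.0 + x**alpha) ** ((beta - gamma) / alpha))
```

The SZ signal along a line of sight is proportional to the integral of this electron pressure, which is why a smooth ellipsoidal gNFW model can be fit directly to imaging and visibility data.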


2019 ◽  
Vol 2 (2) ◽  
pp. 169-187 ◽  
Author(s):  
Ruben C. Arslan

Data documentation in psychology lags behind not only many other disciplines, but also basic standards of usefulness. Psychological scientists often prefer to invest the time and effort that would be necessary to document existing data well in other duties, such as writing and collecting more data. Codebooks therefore tend to be unstandardized and stored in proprietary formats, and they are rarely properly indexed in search engines. This means that rich data sets are sometimes used only once—by their creators—and left to disappear into oblivion. Even if they can find an existing data set, researchers are unlikely to publish analyses based on it if they cannot be confident that they understand it well enough. My codebook package makes it easier to generate rich metadata in human- and machine-readable codebooks. It uses metadata from existing sources and automates some tedious tasks, such as documenting psychological scales and reliabilities, summarizing descriptive statistics, and identifying patterns of missingness. The codebook R package and Web app make it possible to generate a rich codebook in a few minutes and just three clicks. Over time, its use could lead to psychological data becoming findable, accessible, interoperable, and reusable, thereby reducing research waste and benefiting both its users and the scientific community as a whole.
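The kind of machine-readable, per-variable metadata the codebook package automates can be sketched in a few lines (the actual package is an R library; this Python analogue, with a hypothetical `make_codebook` helper, only illustrates the idea of summarizing descriptives and missingness into a standardized, indexable format):

```python
import json
import statistics

def make_codebook(data):
    """Build a machine-readable codebook as JSON.
    `data` maps variable names to lists of values (None = missing).
    Each entry records sample size, missingness, and, for numeric
    variables, mean and standard deviation."""
    entries = {}
    for name, values in data.items():
        present = [v for v in values if v is not None]
        entry = {"n": len(present), "n_missing": len(values) - len(present)}
        if present and all(isinstance(v, (int, float)) for v in present):
            entry["mean"] = statistics.mean(present)
            entry["sd"] = statistics.stdev(present) if len(present) > 1 else 0.0
        entries[name] = entry
    return json.dumps(entries, indent=2)
```

Because the output is plain structured text rather than a proprietary format, such a codebook can be indexed by search engines and read by both humans and machines, which is exactly the findability gap the abstract describes.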


2019 ◽  
Author(s):  
Carlos Siordia ◽  
Ophra Leyser-Whalen

Previous work argues that confidentiality is compromised by using an individual’s sex, full date of birth, and US zip code. Using the American Community Survey, we test this assumption, while maintaining participant confidentiality, to study how the timing of births varies by season, region, race/ethnicity, origin, sex, and birth cohort. We found that region and demographic factors help explain the likelihood of giving birth in warm months, which provides evidence contrary to the birth-rate temporal-homogeneity assumption.


2015 ◽  
Vol 7 (4) ◽  
pp. 35-52 ◽  
Author(s):  
Edel Jennings ◽  
Mark Roddy ◽  
Alexander J. Leckey ◽  
Guy Feigenblat

Mobile and social computing is rapidly evolving towards a deeper integration with the physical world due to the proliferation of smart connected objects. It is widely acknowledged that involving end users in the design, development, and evaluation of applications that function within the resulting complex socio-technical systems is crucial. However, reliable methods for managing the evaluation of medium-fidelity prototypes, whose utility often depends on rich data sets and/or the presence of multiple users simultaneously engaging in multiple activities, have not yet emerged. The authors report on the use of scripted role-play as an experimental approach applied in a mixed-methods evaluation of early prototypes of a suite of professional networking applications targeting a conference-attendance scenario. Their evaluation was significantly constrained by the limited availability of a small cohort of end users for a relatively short period of time, which posed a challenge in defining interactions that would ensure these users could experience and understand the novel application features. The authors observed that participatory role-play facilitated deeper user engagement with, exploration of, and discussion about the mobile social applications than would have been possible with traditional usability approaches, given the small user cohort and the time-constrained conditions.


Author(s):  
Ignacio Palacios-Huerta

A wealth of research in recent decades has seen the economic approach to human behavior extended over many areas previously considered to belong to sociology, political science, law, and other fields. Research has also shown that economics can provide insight into many aspects of sports, including soccer. This book uses soccer to test economic theories and document novel human behavior, thus illuminating economics through the world's most popular sport. The book offers unique and often startling insights into game theory and microeconomics, covering topics such as mixed strategies, discrimination, incentives, and human preferences. It also looks at finance, experimental economics, behavioral economics, and neuroeconomics. The book provides rich data sets and environments that shed light on universal economic principles in interesting and useful ways. It is essential reading for students, researchers, and sports enthusiasts as it shows what soccer can do for economics.
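A concrete example of the game-theoretic content mentioned above: penalty kicks are commonly modeled as a 2x2 zero-sum game in which kicker and goalkeeper each randomize between left and right so as to leave the opponent indifferent. A minimal sketch (the scoring probabilities in the test are hypothetical, not empirical estimates from the book):

```python
def penalty_equilibrium(pLL, pLR, pRL, pRR):
    """Mixed-strategy equilibrium of a 2x2 zero-sum penalty-kick game.
    pXY is the kicker's scoring probability when the kicker chooses side X
    and the keeper dives to side Y (X, Y in {L, R}).
    Returns (kicker_prob_L, keeper_prob_L): the probabilities of choosing
    Left that make the opponent indifferent between their two actions."""
    kicker_L = (pRR - pRL) / (pLL - pLR + pRR - pRL)
    keeper_L = (pRR - pLR) / (pLL - pRL + pRR - pLR)
    return kicker_L, keeper_L
```

Each formula comes from the indifference condition: for instance, the kicker's mixture equalizes the keeper's expected concession rate whether the keeper dives left or right, which is the minimax logic tested empirically with professional players' penalty data.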

