Determining the Impact of Eric Clapton on Music Using RDF Graphs

Author(s):  
Ronald P. Reck ◽  
Kenneth B. Sall ◽  
Wendy A. Swanbeck

As music is a topic of interest to many, it is no surprise that developers have applied web and semantic technology to provide various RDF datasets describing relationships among musical artists, albums, songs, genres, and more. As avid fans of blues and rock music, we wondered whether we could construct SPARQL queries to examine properties and relationships between performers in order to answer global questions such as "Who has had the greatest impact on rock music?" Our primary focus was Eric Clapton, a musical artist whose decades-spanning career includes both highly successful solo work and membership in several world-renowned bands. The application of semantic technology to a public dataset can provide useful insights into how similar approaches can be applied to realistic domain problems, such as finding relationships between persons of interest. A clear understanding of the semantics of the available RDF properties is of course crucial, but achieving it is a substantial challenge, especially when leveraging information from similar yet distinct data sources. This paper explores the DBpedia and MusicBrainz data sources, accessed through OpenLink Virtuoso Universal Server with a Drupal frontend. Much attention is given to the challenges we encountered, especially those posed by relatively large, community-entered open datasets of varying quality, and to the strategies we employed or recommend for overcoming them.
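The following sketch illustrates the kind of exploratory query the paper describes, issued against the public DBpedia SPARQL endpoint via the SPARQLWrapper Python library. The specific properties used (dbo:associatedBand, dbo:associatedMusicalArtist) are assumptions about the DBpedia ontology rather than properties cited by the authors, and would need to be verified against the live schema.

```python
# Illustrative sketch only (not the authors' code): list artists and bands
# that DBpedia associates with Eric Clapton. The property names below are
# assumptions about the DBpedia ontology and may need adjusting.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>

    SELECT DISTINCT ?artist WHERE {
        { dbr:Eric_Clapton dbo:associatedBand ?artist . }
        UNION
        { dbr:Eric_Clapton dbo:associatedMusicalArtist ?artist . }
    }
    LIMIT 50
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["artist"]["value"])
```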

Author(s):  
Kim Fridkin ◽  
Patrick Kenney

This book develops and tests the “tolerance and tactics theory of negativity.” The theory argues that citizens differ in their tolerance of negative campaigning, while candidates vary in the tactics used to attack their opponents, with negative messages differing in their relevance to voters and in the civility of their tone. The interplay between citizens’ tolerance of negativity and candidates’ negative messages helps clarify when negative campaigning will influence citizens’ evaluations of candidates and their likelihood of voting. A diverse set of data sources from U.S. Senate elections (e.g., survey data, experiments, content analysis, focus groups) was collected across several years to test the theory. The tolerance and tactics theory of negativity receives strong empirical validation. First, people differ systematically in their tolerance for negativity, and their tolerance changes over the course of the campaign. Second, people’s levels of tolerance consistently and powerfully influence how they assess negative messages. Third, the relevance and civility of negative messages consistently influence citizens’ assessments of candidates competing for office. That is, negative messages focusing on relevant topics and utilizing an uncivil tone produce significant changes in people’s impressions of the candidates. Furthermore, people’s tolerance of negativity influences their susceptibility to negative campaigning. Specifically, relevant and uncivil messages are most influential for people who are least tolerant of negative campaigning. The relevance and civility of campaign messages also alter people’s likelihood of voting, and the impact of negative messages on turnout is more consequential for people with less tolerance of negativity.


Epidemiologia ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 315-324
Author(s):  
Juan M. Banda ◽  
Ramya Tekumalla ◽  
Guanyu Wang ◽  
Jingyuan Yu ◽  
Tuo Liu ◽  
...  

As the COVID-19 pandemic continues to spread worldwide, an unprecedented amount of open data is being generated for medical, genetics, and epidemiological research. The unparalleled rate at which many research groups around the world are releasing data and publications on the ongoing pandemic is allowing other scientists to learn from local experiences and data generated on the front lines of the COVID-19 pandemic. However, there is a need to integrate additional data sources that map and measure the role of social dynamics in such a unique worldwide event for biomedical, biological, and epidemiological analyses. For this purpose, we present a large-scale curated dataset of over 1.12 billion tweets related to COVID-19 chatter, generated between 1 January 2020 and 27 June 2021 (at the time of writing) and growing daily. This resource provides a freely available additional data source for researchers worldwide to conduct a wide and diverse range of research projects, such as epidemiological analyses, studies of emotional and mental responses to social distancing measures, identification of sources of misinformation, and stratified measurement of sentiment towards the pandemic in near real time, among many others.
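Datasets of this kind are typically shared as tweet identifiers rather than full tweet text, so a first step is simply loading and summarizing the ID files. The sketch below assumes a tab-separated release with "tweet_id" and "date" columns; the file name and column names are hypothetical placeholders rather than details taken from the actual distribution.

```python
# Illustrative sketch: load a hypothetical tweet-ID file with pandas and
# count tweets per day. Adjust the file name and column names to match
# the released data files.
import pandas as pd

ids = pd.read_csv("covid19_tweet_ids.tsv", sep="\t")
daily_counts = ids.groupby("date")["tweet_id"].count()
print(daily_counts.head())
```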


2021 ◽  
Vol 13 (15) ◽  
pp. 2869
Author(s):  
MohammadAli Hemati ◽  
Mahdi Hasanlou ◽  
Masoud Mahdianpari ◽  
Fariba Mohammadimanesh

With uninterrupted space-based data collection since 1972, Landsat plays a key role in systematic monitoring of the Earth’s surface, enabled by an extensive, free, and radiometrically consistent global archive of imagery. Governments and international organizations rely on Landsat time series for monitoring and deriving a systematic understanding of the dynamics of the Earth’s surface at a spatial scale relevant to management, scientific inquiry, and policy development. In this study, we identify trends in Landsat-informed change detection studies by surveying 50 years of published applications, processing, and change detection methods. Specifically, a representative database was created, resulting in 490 relevant journal articles derived from the Web of Science and Scopus. From these articles, we provide a review of recent developments, opportunities, and trends in Landsat change detection studies. The impact of the Landsat free and open data policy in 2008 is evident in the literature as a turning point in the number and nature of change detection studies. Based upon the search terms used and articles included, the average number of Landsat images used per study increased from 10 images before 2008 to 100,000 images in 2020. The 2008 opening of the Landsat archive resulted in a marked increase in the number of images used per study, typically providing the basis for the other trends in evidence. These key trends include an increase in automated processing, use of analysis-ready data (especially those with atmospheric correction), and use of cloud computing platforms, all applied over increasingly large areas. The nature of change detection methods has evolved from representative bi-temporal pairs to time series of images capturing dynamics and trends, capable of revealing both gradual and abrupt changes. The results also revealed greater use of nonparametric classifiers for Landsat change detection analysis. Landsat-9, to be launched in September 2021, in combination with the continued operation of Landsat-8 and integration with Sentinel-2, enhances opportunities for improved monitoring of change over increasingly large areas with greater intra- and interannual frequency.
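As a concrete point of reference for the bi-temporal end of that methodological spectrum, the sketch below differences two co-registered Landsat bands and thresholds the result. It is purely illustrative: the file names and the two-standard-deviation cutoff are assumptions, not details from any reviewed study.

```python
# Minimal bi-temporal change-detection sketch (illustrative only): difference
# two co-registered Landsat bands and flag pixels that deviate strongly.
import numpy as np
import rasterio

with rasterio.open("landsat_t1_nir.tif") as src1, rasterio.open("landsat_t2_nir.tif") as src2:
    band_t1 = src1.read(1).astype("float32")
    band_t2 = src2.read(1).astype("float32")

diff = band_t2 - band_t1
change_mask = np.abs(diff - diff.mean()) > 2 * diff.std()  # simple statistical cutoff
print(f"Changed pixels: {change_mask.sum()} of {change_mask.size}")
```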


2021 ◽  
Vol 13 (5) ◽  
pp. 905
Author(s):  
Chuyi Wu ◽  
Feng Zhang ◽  
Junshi Xia ◽  
Yichen Xu ◽  
Guoqing Li ◽  
...  

Building damage status is vital for planning rescue and reconstruction after a disaster, yet it is difficult to detect and to grade its severity. Most existing studies focus on binary classification, and the model’s attention is easily distracted. In this study, we propose a Siamese neural network that can localize and classify damaged buildings in one pass. The main parts of this network are a variety of attention U-Nets using different backbones. The attention mechanism enables the network to focus on the effective features and channels, reducing the impact of useless features. We train the networks on the xBD dataset, a large-scale dataset for the advancement of building damage assessment, and compare their balanced F (F1) scores. The scores show that SEresNeXt with an attention mechanism gives the best performance, with an F1 score of 0.787. To improve the accuracy, we fused the results and obtained the best overall F1 score of 0.792. To verify the transferability and robustness of the model, we selected data from the Maxar Open Data Program covering two recent disasters and investigated the performance. The visual comparison shows that our model is robust and transferable.
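For readers unfamiliar with the Siamese setup, the sketch below shows the general pattern in PyTorch: a single backbone with shared weights encodes both the pre- and post-disaster images so their features are directly comparable, and a small head classifies per-pixel damage from the concatenated features. This is an illustrative simplification, not the authors' architecture; the placeholder backbone stands in for the attention U-Nets (e.g., with an SEResNeXt encoder) described above.

```python
# Schematic Siamese wrapper (illustrative sketch, not the authors' exact model):
# one shared backbone is applied to the pre- and post-disaster images, and the
# two feature maps are concatenated before a 1x1 conv head predicts damage classes.
import torch
import torch.nn as nn

class SiameseDamageNet(nn.Module):
    def __init__(self, backbone: nn.Module, feature_channels: int, num_classes: int = 4):
        super().__init__()
        self.backbone = backbone  # shared weights for both time points
        self.head = nn.Conv2d(2 * feature_channels, num_classes, kernel_size=1)

    def forward(self, pre_image: torch.Tensor, post_image: torch.Tensor) -> torch.Tensor:
        pre_features = self.backbone(pre_image)
        post_features = self.backbone(post_image)
        fused = torch.cat([pre_features, post_features], dim=1)
        return self.head(fused)

# Toy usage with a placeholder backbone; a real model would plug in an
# attention U-Net here.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model = SiameseDamageNet(backbone, feature_channels=16)
pre = torch.randn(1, 3, 128, 128)
post = torch.randn(1, 3, 128, 128)
print(model(pre, post).shape)  # -> torch.Size([1, 4, 128, 128])
```

Sharing the backbone weights is the key design choice: it guarantees that differences between the pre- and post-event feature maps reflect changes on the ground rather than differences between two independently trained encoders.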


Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1432
Author(s):  
Xwégnon Ghislain Agoua ◽  
Robin Girard ◽  
Georges Kariniotakis

The efficient integration of photovoltaic (PV) production in energy systems is conditioned by the capacity to anticipate its variability, that is, the capacity to provide accurate forecasts. In recent years the focus has moved from classical forecasting methods dealing with a single power plant to spatio-temporal approaches, where geographically dispersed data are used as input to improve forecasts for a site at horizons up to 6 h ahead. These spatio-temporal approaches perform differently depending on the data sources available, but the impact of each source on actual forecasting performance has not yet been evaluated. In this paper, we propose a flexible spatio-temporal model to generate PV production forecasts for horizons up to 6 h ahead, and we use this model to evaluate the effect of different spatial and temporal data sources on the accuracy of the forecasts. The sources considered are measurements from neighboring PV plants, local meteorological stations, Numerical Weather Predictions (NWP), and satellite images. The evaluation is carried out on a real-world test case featuring 136 PV plants. The forecasting error was evaluated for each data source using the Mean Absolute Error and Root Mean Square Error. The results show that neighboring PV plants help achieve a roughly 10% reduction in forecasting error for the first three hours, while satellite images contribute an additional 3% reduction across all horizons up to 6 h ahead. The NWP data show no improvement for horizons up to 6 h but are essential for longer horizons.
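The two error metrics are straightforward to reproduce. The sketch below computes MAE and RMSE for two hypothetical forecast configurations on synthetic data, purely to make the comparison concrete; the data, noise levels, and configuration names are assumptions, not values from the study.

```python
# Illustrative sketch of the error metrics used in the comparison (numpy only).
import numpy as np

def mae(actual: np.ndarray, predicted: np.ndarray) -> float:
    return float(np.mean(np.abs(actual - predicted)))

def rmse(actual: np.ndarray, predicted: np.ndarray) -> float:
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

rng = np.random.default_rng(0)
measured = rng.uniform(0, 1, size=240)                   # normalized PV production
forecast_local = measured + rng.normal(0, 0.10, 240)     # local data only (hypothetical)
forecast_spatio = measured + rng.normal(0, 0.09, 240)    # + neighboring plants (hypothetical)

for name, forecast in [("local only", forecast_local), ("spatio-temporal", forecast_spatio)]:
    print(f"{name}: MAE={mae(measured, forecast):.3f}, RMSE={rmse(measured, forecast):.3f}")
```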


2021 ◽  
Author(s):  
Oliver Benning ◽  
Jonathan Calles ◽  
Burak Kantarci ◽  
Shahzad Khan

This article presents a practical method for assessing the risk profiles of communities by tracking/acquiring, fusing, and analyzing data from public transportation, district population distribution, passenger interactions, and cross-locality travel. The proposed framework fuses these data sources into a realistic simulation of a transit network for a given time span. By providing credible insights into the impact of public transit on pandemic spread, the research findings help lay the groundwork for tools that could give pandemic response teams and municipalities a robust framework for evaluating which city districts are most at risk and how to adjust municipal services accordingly.
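As a purely illustrative example of the kind of data fusion involved, the sketch below combines district population density and transit ridership into a simple normalized exposure score. The column names, numbers, and equal weighting are hypothetical and are not taken from the article.

```python
# Illustrative sketch: fuse two per-district factors into a normalized exposure score.
import pandas as pd

districts = pd.DataFrame({
    "district": ["A", "B", "C"],
    "population_density": [4200, 1800, 9500],   # residents per km^2 (hypothetical)
    "daily_boardings": [12000, 3000, 25000],    # transit ridership (hypothetical)
})

# Min-max normalize each factor, then average into a simple exposure score.
for col in ["population_density", "daily_boardings"]:
    col_range = districts[col].max() - districts[col].min()
    districts[col + "_norm"] = (districts[col] - districts[col].min()) / col_range

districts["exposure_score"] = districts[["population_density_norm", "daily_boardings_norm"]].mean(axis=1)
print(districts[["district", "exposure_score"]])
```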


2011 ◽  
Vol 2011 ◽  
pp. 1-8 ◽  
Author(s):  
Joe Nocera ◽  
Thomas W. Buford ◽  
Todd M. Manini ◽  
Kelly Naugle ◽  
Christiaan Leeuwenburgh ◽  
...  

A primary focus of longevity research is to identify prognostic risk factors that can be mediated by early treatment efforts. To date, much of this work has focused on understanding the biological processes that may contribute to the aging process and age-related disease conditions. Although such processes are undoubtedly important, no current biological intervention aimed at increasing health and lifespan exists. Interestingly, a close relationship between mobility performance and the aging process has been documented in older adults. For example, recent studies have identified functional status, as assessed by walking speed, as a strong predictor of major health outcomes, including mortality, in older adults. This paper describes the relationship between mobility function and the comorbidities associated with decreased health and lifespan in obese, older adults. Concurrently, lifestyle interventions, including diet and exercise, are described as a means to improve mobility function and thereby limit the functional limitations associated with increased mortality.


2016 ◽  
Vol 48 (4) ◽  
pp. 161-171 ◽  
Author(s):  
Martyna Sosnowska ◽  
Izabela Karsznia

Geographic information systems (GIS) and their tools support the process of real estate trading. Of key importance is the ability to visualise information about real estate in the form of maps of average real estate transaction prices. The following study presents a methodology for mapping average real estate transaction prices using GIS. The map development process comprised three main stages. In the first stage, the input data were processed and statistically analysed. Official data came from the Register of Real Estate Prices and Values, and open data from the National Register of Boundaries. The second stage involved visualizing the data in the form of maps of average apartment prices using the cartographic methods of choropleth maps and diagrams. The commercial tool ArcMap 10.3 and the free Quantum GIS software were used to design the maps of average real estate transaction prices, in order to compare the capabilities of the two types of software. As a result, eight maps were designed presenting the average transaction prices for residential properties in the Warsaw district of Ursynów in 2015. The final stage was the analysis of the designed maps. The influence of the choice of reference units on the content of the visualization, and the impact of combining cartographic presentation methods on the complexity of the presentation of real estate information, were also analysed.
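A minimal sketch of the choropleth step is shown below in Python (the study itself used ArcMap 10.3 and Quantum GIS). The file and column names are hypothetical placeholders for the reference units and the price register.

```python
# Illustrative sketch: average transaction prices per reference unit, drawn
# as a choropleth with geopandas. File and column names are placeholders.
import geopandas as gpd
import pandas as pd

units = gpd.read_file("ursynow_reference_units.shp")   # reference unit polygons
prices = pd.read_csv("transactions_2015.csv")          # one row per transaction

# Average transaction price per square metre within each reference unit.
avg_prices = prices.groupby("unit_id", as_index=False)["price_per_m2"].mean()

choropleth = units.merge(avg_prices, on="unit_id", how="left")
ax = choropleth.plot(column="price_per_m2", cmap="OrRd", legend=True)
ax.set_title("Average apartment transaction prices, Ursynów 2015")
```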


Author(s):  
Sabrina T. Wong ◽  
Julia M. Langton ◽  
Alan Katz ◽  
Martin Fortin ◽  
Marshall Godwin ◽  
...  

Aim: To describe the process by which the 12 community-based primary health care (CBPHC) research teams worked together and fostered cross-jurisdictional collaboration, including collection of common indicators with the goal of using the same measures and data sources.
Background: A pan-Canadian mechanism for common measurement of the impact of primary care innovations across Canada is lacking. The Canadian Institutes for Health Research and its partners funded 12 teams to conduct research and collaborate on development of a set of commonly collected indicators.
Methods: A working group representing the 12 teams was established. They undertook an iterative process to consider existing primary care indicators identified from the literature and by stakeholders. Indicators were agreed upon with the intention of addressing three objectives across the 12 teams: (1) describing the impact of improving access to CBPHC; (2) examining the impact of alternative models of chronic disease prevention and management in CBPHC; and (3) describing the structures and context that influence the implementation, delivery, cost, and potential for scale-up of CBPHC innovations.
Findings: Nineteen common indicators within the core dimensions of primary care were identified: access, comprehensiveness, coordination, effectiveness, and equity. We also agreed to collect data on health care costs and utilization within each team. Data sources include surveys, health administrative data, interviews, focus groups, and case studies. Collaboration across these teams sets the foundation for a unique opportunity for new knowledge generation, over and above any knowledge developed by any one team. Keys to success are each team’s willingness to engage and commitment to working across teams, funding to support this collaboration, and distributed leadership across the working group. Reaching consensus on collection of common indicators is challenging but achievable.

