La Calle Corredera, Jerez de la Frontera, Spain

Author(s):  
Kathryn L. Ness

“La Calle Corredera, Jerez de la Frontera, Spain” discusses the history and archaeology of Jerez de la Frontera in Andalucía, Spain, and one of the three major data sets used in Setting the Table. The majority of the chapter focuses on one local eighteenth-century household site known as La Calle Corredera. It describes the artifacts from two mid-eighteenth-century features, a well and a trash receptacle, and the ceramics recovered from these deposits. Using COSA, the chapter examines the vessel forms that were discarded and argues that changes in tableware are indicative of broader changes in Spanish dining practices, including the transition away from traditional stews toward a non-broth-based diet, possibly one that incorporated French cooking techniques.

Author(s):  
Kathryn L. Ness

“The Ponce de León and de Salas Households, St. Augustine, Florida” discusses the history and archaeology of St. Augustine, Florida, and two of the three major data sets used in Setting the Table. Specifically, it focuses on the households of two wealthy, mid-eighteenth-century families: the Ponce de Leóns and the de Salases. The chapter provides biographical information on the families who owned and lived on these properties and describes the material that was recovered there in later archaeological excavations. It focuses on the ceramics from three eighteenth-century deposits: the trash pit and well from the Ponce de León household and a well from the de Salas property. In comparing these sites, the data appear to contradict the traditional hypothesis that wealthy Spaniards in Spanish America would have owned and displayed a significant amount of Spanish and Spanish-American goods. The chapter argues instead that wealthy individuals in this Florida town were aware of and following fashions in Spain, many of which reflected broader trends in Europe and incorporated ideas, goods, and aesthetics from England, France, and elsewhere in Europe.


Author(s):  
Kathryn L. Ness

“A Changing Spanish Identity” outlines the research questions and data sets discussed in Setting the Table by introducing the notion of early modern Spanish cultural identity and the changes it underwent in the eighteenth century. It explains the author’s use of Don Quixote as a guide through the study and why this quintessential Spanish novel is appropriate for exploring themes of cultural change and identity. The chapter argues that, despite the major role the Spanish Empire played in early modern history, it has been largely underrepresented in studies of the Atlantic world. Much of the chapter is devoted to a brief introduction to the three sites addressed in the study as well as the methodology used to investigate them. The chapter concludes with an outline of subsequent chapters.


TAPPI Journal ◽  
2016 ◽  
Vol 15 (5) ◽  
pp. 309-319
Author(s):  
JIANZHONG FU ◽  
PETER HART

The MWV mill in Covington, VA, USA, experienced a long-term trend of increasing episodes of paper indents that resulted in significant quantities of internal rejects and production downtime. When traditional troubleshooting techniques failed to resolve the problem, big data analysis techniques were employed to help determine the root causes of this increasingly frequent issue. Nearly 6,000 operating variables were selected for a deep-dive, multi-year analysis after reviewing mill-wide process logs and more than 60,000 PI tags (data points) collected from one of the major data historian systems at the MWV Covington mill. Nine billion data points were collected from November 2011 to August 2014. Strategies and methods were developed to format, clean, classify, and sort the various data sets to compensate for process lag time and to align timestamps, as well as to rank potential causes or indicators. GE Intelligent Platforms software was employed to develop decision trees for root cause analysis. Insights and possible correlations that were previously invisible or ignored were obtained across the mill, from pulping, bleaching, and chemical recovery to the papermaking process. Several findings led the mill to revise selected process targets and to reconsider a step change in the drying process. These changes have had significant impacts on the mill’s product quality, cost, and market performance. Mill-wide communication of the identified results helped transform the findings into executable actions, and several projects were initiated.
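The abstract does not describe the alignment method in detail; as a minimal sketch, one common way to compensate for process lag time when aligning an upstream process variable with a downstream quality signal is to slide one series against the other and keep the lag that maximizes correlation. The function below is hypothetical and assumes two evenly sampled, timestamp-aligned series:

```python
def estimate_lag(upstream, downstream, max_lag):
    """Return the shift (in samples) of `downstream` relative to
    `upstream` that maximizes the Pearson correlation."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb) if sa and sb else 0.0

    # Try each candidate lag and keep the one with the best correlation
    return max(range(max_lag + 1),
               key=lambda k: pearson(upstream[:len(upstream) - k],
                                     downstream[k:]))
```

Once the lag is known, the upstream series can be shifted before building decision trees, so cause and effect share a common time base.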


2013 ◽  
Vol 73 (3) ◽  
pp. 766-791 ◽  
Author(s):  
Kathryn Graddy

Roger de Piles (1635–1709) was a French art critic who decomposed the style and ability of 58 different artists into areas of composition, drawing, color, and expression, rating each artist on a 20-point scale in each category. Based on evidence from two data sets that together span from the mid-eighteenth century to the present, this article shows that de Piles’ overall ratings have withstood the test of a very long period of time, with estimates indicating that the works of his higher-rated artists achieved both greater returns and higher critical acclaim than the works of his lower-rated artists.


Author(s):  
Anisha P. Rodrigues ◽  
Niranjan N. Chiplunkar ◽  
Roshan Fernandes

Social media is used to share data and information among large groups of people. Numerous forums, blogs, social networks, news reports, e-commerce websites, and many other online media play a role in sharing individual opinions. The data generated from these sources is huge and unstructured. Big data is a term used for data sets that are so large or complex that they cannot be processed by traditional processing systems. Sentiment analysis is one of the major data analytics tasks applied to big data. It is a natural language processing task that determines whether a text contains subjective information and what information it expresses. It helps in achieving various goals such as measuring customer satisfaction, observing public mood on political movements, predicting movie sales, and gathering market intelligence. In this chapter, the authors present various techniques used for sentiment analysis and related work using these techniques. The chapter also presents open issues and challenges in the sentiment analysis landscape.
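The abstract does not list the chapter’s specific techniques; purely as an illustration, the simplest family of sentiment analysis methods scores text against hand-built polarity lexicons. The word lists and function below are hypothetical:

```python
# Tiny, hypothetical polarity lexicons for illustration only
POSITIVE = {"good", "great", "excellent", "love", "satisfied"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "disappointed"}

def polarity(text):
    """Classify text as positive/negative/neutral by lexicon counts."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Real systems replace the hand-built lexicons with learned models, but the input/output shape, raw text in, a polarity label out, is the same.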


2016 ◽  
Author(s):  
Qingyu Chen ◽  
Justin Zobel ◽  
Karin Verspoor

Duplication of information in databases is a major data quality challenge. The presence of duplicates, implying either redundancy or inconsistency, can have a range of impacts on the quality of analyses that use the data. To provide a sound basis for research on this issue in databases of nucleotide sequences, we have developed new, large-scale validated collections of duplicates, which can be used to test the effectiveness of duplicate detection methods. Previous collections were either designed primarily to test efficiency, or contained only a limited number of duplicates of limited kinds. To date, duplicate detection methods have been evaluated on separate, inconsistent benchmarks, leading to results that cannot be compared and, due to limitations of the benchmarks, of questionable generality.

In this study we present three nucleotide sequence database benchmarks, based on information drawn from a range of resources, including information derived from mapping to Swiss-Prot and TrEMBL. Each benchmark has distinct characteristics. We quantify these characteristics and argue for their complementary value in evaluation. The benchmarks collectively contain a vast number of validated biological duplicates; the largest has nearly half a billion duplicate pairs (although this is probably only a tiny fraction of the total that is present). They are also the first benchmarks targeting the primary nucleotide databases. The records include the 21 most heavily studied organisms in molecular biology research. Our quantitative analysis shows that duplicates in the different benchmarks, and in different organisms, have different characteristics. It is thus unreliable to evaluate duplicate detection methods against any single benchmark. For example, the benchmark derived from Swiss-Prot mappings identifies more diverse types of duplicates, showing the importance of expert curation, but is limited to coding sequences.

Overall, these benchmarks form a resource that we believe will be of great value for the development and evaluation of the duplicate detection methods that are required to help maintain these essential resources.

Availability: The benchmark data sets are available at https://bitbucket.org/biodbqual/benchmarks.
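As a minimal sketch of the simplest duplicate category such benchmarks cover, exact duplicates can be found by grouping records on a normalized sequence key; the near-duplicate and expert-curated categories the abstract describes require far more than this. The record structure below is hypothetical:

```python
from collections import defaultdict

def exact_duplicate_pairs(records):
    """Given {record_id: sequence}, return pairs of record IDs whose
    normalized sequences are identical."""
    groups = defaultdict(list)
    for rec_id, seq in records.items():
        # Normalize: uppercase, strip whitespace from the raw sequence
        key = "".join(seq.upper().split())
        groups[key].append(rec_id)
    pairs = []
    for ids in groups.values():
        # Every unordered pair within a group is an exact-duplicate pair
        pairs.extend((a, b) for i, a in enumerate(ids) for b in ids[i + 1:])
    return pairs
```

Grouping by key keeps the cost near-linear in the number of records, which matters when a benchmark holds hundreds of millions of duplicate pairs.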


2019 ◽  
Vol 25 (2) ◽  
pp. 97-109 ◽  
Author(s):  
Holger Döring ◽  
Sven Regel

Here, we present Party Facts ( www.partyfacts.org ), a modern online database of political parties worldwide. With this project, we provide a comprehensive database of political parties across time and world regions, link party information from some of the core social science data sets, and offer a platform to link political parties across data sets. An initial list of 4,000 core parties in 212 countries is based mainly on four major data sets. The core parties in Party Facts are linked with party information from some of the key social science data sets, currently 26. From these data sets, we have included and linked about 15,000 party observations. Party Facts is an important step toward a more coherent operationalization of political parties across time and space and a gateway to existing data sets on political parties. It makes it possible to answer innovative party research questions that require the combination of multiple data sets.
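Party Facts’ actual schema is not given in the abstract; a minimal sketch of the idea of linking observations from external data sets to core party records through a crosswalk table (all structures hypothetical) might look like:

```python
def link_parties(core, external, crosswalk):
    """Attach observations from external data sets to core party records.

    core:      {core_id: {...party attributes...}}
    external:  {(dataset, external_id): {...observation...}}
    crosswalk: [(core_id, dataset, external_id), ...]
    """
    linked = {cid: dict(rec, observations=[]) for cid, rec in core.items()}
    for core_id, dataset, ext_id in crosswalk:
        obs = external.get((dataset, ext_id))
        if obs is not None and core_id in linked:
            # Record which data set the observation came from
            linked[core_id]["observations"].append(dict(obs, dataset=dataset))
    return linked
```

The crosswalk is the valuable artifact: once each external ID is mapped to a core party, any pair of the 26 data sets can be joined through the core IDs.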


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at.% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80×80 spectrum-image (25 Mbytes). An energy range of 39–89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using the methods and software described in [1].
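The two processing routes described above can be sketched as channel-wise operations on a pixel’s spectrum pair. This is an illustrative reading of the text, with the shift direction of the 1 eV offset assumed:

```python
def difference_spectrum(s1, s2):
    """Artifact-corrected difference: channel-wise subtraction of the
    two spectra recorded with a 1 eV energy offset."""
    return [a - b for a, b in zip(s1, s2)]

def normal_spectrum(s1, s2, offset_channels=20):
    """Numerically remove the 1 eV offset (20 channels/eV -> 20 channels;
    shift direction assumed) from the second spectrum, then add the pair.
    The vacated channels at the end are zero-padded."""
    shifted = s2[offset_channels:] + [0.0] * offset_channels
    return [a + b for a, b in zip(s1, shifted)]
```

Applying either function at every pixel of the 80×80 image yields the 2D floating-point images used for the quantitative comparison.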


Author(s):  
Mark Ellisman ◽  
Maryann Martone ◽  
Gabriel Soto ◽  
Eliezer Masliah ◽  
David Hessler ◽  
...  

Structurally-oriented biologists examine cells, tissues, organelles, and macromolecules in order to gain insight into cellular and molecular physiology by relating structure to function. The understanding of these structures can be greatly enhanced by the use of techniques for the visualization and quantitative analysis of three-dimensional structure. Three projects from current research activities will be presented in order to illustrate both the present capabilities of computer-aided techniques as well as their limitations and future possibilities.

The first project concerns the three-dimensional reconstruction of the neuritic plaques found in the brains of patients with Alzheimer's disease. We have developed a software package, “Synu,” for the investigation of 3D data sets, which has been used in conjunction with laser confocal light microscopy to study the structure of the neuritic plaque. Tissue sections of autopsy samples from patients with Alzheimer's disease were double-labeled for tau, a cytoskeletal marker for abnormal neurites, and synaptophysin, a marker of presynaptic terminals.

