A Model of Position Effects in the Sequential Lineup

2021 ◽  
Author(s):  
John C Dunn ◽  
Matthew Philip Kaesler ◽  
Carolyn Semmler

What is the effect of placing the suspect in different positions in a sequential lineup? To explore this question, we developed and applied the Independent Sequential Lineup model, which analyzes a sequential lineup in terms of both identification position, the position at which the witness identifies a lineup item as the target, and target position, the position at which the target or suspect appears. We conducted a large-scale online eyewitness memory experiment with 7,204 participants, each of whom was tested on a 6-item sequential lineup with an explicit stopping rule. The model fit these data well and revealed systematic effects of lineup position on underlying discriminability and response criteria. We also fit the model to data from a similar pair of experiments conducted recently by Wilson, Donnelly, Christenfeld and Wixted (2019; Journal of Memory and Language, 104, 108-125), both with and without application of a stopping rule. In all data sets, when a stopping rule was applied, underlying discriminability was found to be constant, or to increase slightly, across target position. In the absence of a stopping rule, discriminability was found to decrease substantially. We also observed a substantial increase in response criteria following presentation of the target. We discuss the implications of these findings for current theories of recognition memory and for current applications of the sequential lineup in different jurisdictions.
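The abstract summarizes the model's outputs (position-wise discriminability and response criteria) without giving its equations. As a rough, hedged illustration only, not the Independent Sequential Lineup model itself, the sketch below converts hypothetical per-position target and filler identification counts into equal-variance signal-detection estimates of d′ and criterion c:

```python
# Minimal sketch: per-position discriminability (d') and criterion (c) under an
# equal-variance signal-detection model, in the spirit of position-wise lineup
# analysis. All counts below are hypothetical, not data from the study.
from scipy.stats import norm

positions = [1, 2, 3, 4, 5, 6]
hits = [110, 95, 90, 88, 80, 75]          # target IDs at each position (target-present lineups)
target_trials = [300] * 6                  # target-present trials per position
false_alarms = [30, 28, 27, 25, 22, 20]    # filler IDs at that position (target-absent lineups)
lure_trials = [300] * 6                    # target-absent trials per position

def dprime_and_criterion(h, n_sig, fa, n_noise):
    """Equal-variance SDT estimates with a small correction to avoid 0/1 rates."""
    hr = (h + 0.5) / (n_sig + 1.0)
    far = (fa + 0.5) / (n_noise + 1.0)
    d = norm.ppf(hr) - norm.ppf(far)
    c = -0.5 * (norm.ppf(hr) + norm.ppf(far))
    return d, c

for pos, h, ns, fa, nn in zip(positions, hits, target_trials, false_alarms, lure_trials):
    d, c = dprime_and_criterion(h, ns, fa, nn)
    print(f"position {pos}: d' = {d:.2f}, c = {c:.2f}")
```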

2020 ◽  
Vol 495 (2) ◽  
pp. 1613-1640 ◽  
Author(s):  
Mehdi Rezaie ◽  
Hee-Jong Seo ◽  
Ashley J Ross ◽  
Razvan C Bunescu

ABSTRACT Robust measurements of cosmological parameters from galaxy surveys rely on our understanding of systematic effects that impact the observed galaxy density field. In this paper, we present, validate, and implement a systematics mitigation method that uses artificial neural networks to model the relationship between the target galaxy density field and various observational realities, including but not limited to Galactic extinction, seeing, and stellar density. By construction, our method allows a wide class of models and alleviates overtraining by performing k-fold cross-validation and dimensionality reduction via backward feature elimination. By permuting the choice of the training, validation, and test sets, we construct a selection mask for the entire footprint. We apply our method to the extended Baryon Oscillation Spectroscopic Survey (eBOSS) Emission Line Galaxies (ELGs) selection from the Dark Energy Camera Legacy Survey (DECaLS) Data Release 7 and show that the spurious large-scale contamination due to imaging systematics can be significantly reduced by up-weighting the observed galaxy density using the selection mask from the neural network, and that our method is more effective than the conventional linear and quadratic polynomial functions. We perform extensive analyses on simulated mock data sets with and without systematic effects. Our analyses indicate that our methodology is more robust to overfitting than the conventional methods. This method can be utilized in the catalogue generation of future spectroscopic galaxy surveys such as eBOSS and the Dark Energy Spectroscopic Instrument (DESI) to better mitigate observational systematics.
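As a hedged sketch of the general idea (not the authors' pipeline or data), the following trains a small scikit-learn neural network to predict the observed galaxy density per sky pixel from imaging maps under k-fold cross-validation, so that each pixel is predicted only by folds that never saw it, and then turns the predictions into up-weighting factors in the spirit of the selection mask; the maps and counts are synthetic:

```python
# Illustrative k-fold systematics mitigation on synthetic pixel maps.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pix = 5000
# Hypothetical imaging maps per pixel: extinction, seeing, stellar density.
X = rng.normal(size=(n_pix, 3))
# Hypothetical observed density: true mean plus a spurious dependence on the maps.
y = 1.0 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(scale=0.2, size=n_pix)

pred = np.zeros(n_pix)
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    scaler = StandardScaler().fit(X[train_idx])
    model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
    model.fit(scaler.transform(X[train_idx]), y[train_idx])
    # Each pixel is predicted only by folds that never saw it, limiting overfitting.
    pred[test_idx] = model.predict(scaler.transform(X[test_idx]))

# Selection-mask style weights: up-weight pixels where systematics suppress the density.
weights = np.mean(pred) / pred
print(f"weight range: {weights.min():.3f} to {weights.max():.3f}")
```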


Author(s):  
Lior Shamir

Abstract Several recent observations using large data sets of galaxies showed a non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to have gravitational interaction. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each data set was annotated using a different annotation method. Both data sets show a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence shows a dipole axis with probabilities of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^{\circ},\delta=47^{\circ})$ and is well within the $1\sigma$ error range of the location of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^{\circ},\delta=61^{\circ})$.
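As a simplified, hedged stand-in for the paper's cosine (dipole) fitting, the sketch below scans candidate dipole axes on a sky grid, correlates spin signs with the cosine of the angle to each axis, and estimates significance against randomly shuffled spins; the galaxy positions and spin labels are simulated, not the HST or SDSS catalogues:

```python
# Toy dipole-axis scan over a simulated spiral-galaxy catalogue.
import numpy as np

rng = np.random.default_rng(1)
n_gal = 5000
ra = rng.uniform(0.0, 2.0 * np.pi, n_gal)           # right ascension [rad]
dec = np.arcsin(rng.uniform(-1.0, 1.0, n_gal))      # declination [rad], uniform on the sphere
spin = rng.choice([-1, 1], size=n_gal)              # hypothetical spin labels (+1 / -1)

def unit_vectors(ra, dec):
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

gal_vec = unit_vectors(ra, dec)

def dipole_significance(spin, gal_vec, axis_vec, n_rand=100, rng=rng):
    cosang = gal_vec @ axis_vec
    observed = np.sum(spin * cosang)
    # Null distribution: shuffle spin signs while keeping the positions fixed.
    null = np.array([np.sum(rng.permutation(spin) * cosang) for _ in range(n_rand)])
    return (observed - null.mean()) / null.std()

best_sigma, best_axis = -np.inf, None
for a in np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False):
    for d in np.linspace(-np.pi / 2, np.pi / 2, 13):
        sigma = abs(dipole_significance(spin, gal_vec, unit_vectors(a, d)))
        if sigma > best_sigma:
            best_sigma, best_axis = sigma, (np.degrees(a), np.degrees(d))

print(f"most likely dipole axis: RA={best_axis[0]:.1f} deg, Dec={best_axis[1]:.1f} deg, ~{best_sigma:.1f} sigma")
```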


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

The increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions and thereby accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
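The following is only an illustrative sketch of the general strategy, not the paper's implementation: fixed-size blocks of a synthetic field are scored with a simple importance metric (mean gradient magnitude), and blocks above a threshold are compressed losslessly while the rest are quantized first; the block size, metric, and threshold are arbitrary choices made here:

```python
# Importance-driven per-block compression on a synthetic 3D field.
import zlib
import numpy as np

# Synthetic field with a localized sharp feature standing in for simulation output.
x, y, z = np.meshgrid(*[np.linspace(-1.0, 1.0, 128)] * 3, indexing="ij")
field = np.exp(-((x - 0.3) ** 2 + y ** 2 + z ** 2) / 0.02).astype(np.float32)
field += 0.01 * np.random.default_rng(2).normal(size=field.shape).astype(np.float32)

block = 32

def blocks(shape, size):
    for i in range(0, shape[0], size):
        for j in range(0, shape[1], size):
            for k in range(0, shape[2], size):
                yield (slice(i, i + size), slice(j, j + size), slice(k, k + size))

def importance(sub):
    gx, gy, gz = np.gradient(sub)
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)))

# Pass 1: score every block; pass 2: pick a compressor per block based on the score.
regions = list(blocks(field.shape, block))
scores = np.array([importance(field[r]) for r in regions])
threshold = np.percentile(scores, 75)            # keep the top quarter at full precision

payloads = []
for r, s in zip(regions, scores):
    sub = field[r]
    if s >= threshold:                            # important region: lossless
        payloads.append(zlib.compress(sub.tobytes(), level=1))
    else:                                         # low importance: 8-bit quantization first
        lo, hi = float(sub.min()), float(sub.max())
        q = np.round(255.0 * (sub - lo) / (hi - lo + 1e-12)).astype(np.uint8)
        payloads.append(zlib.compress(q.tobytes(), level=1))

raw_mb = field.nbytes / 1e6
out_mb = sum(len(p) for p in payloads) / 1e6
print(f"{len(payloads)} blocks, {raw_mb:.1f} MB raw -> {out_mb:.1f} MB compressed")
```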


GigaScience ◽  
2020 ◽  
Vol 9 (1) ◽  
Author(s):  
T Cameron Waller ◽  
Jordan A Berg ◽  
Alexander Lex ◽  
Brian E Chapman ◽  
Jared Rutter

Abstract Background Metabolic networks represent all chemical reactions that occur between molecular metabolites in an organism’s cells. They offer biological context in which to integrate, analyze, and interpret omic measurements, but their large scale and extensive connectivity present unique challenges. While it is practical to simplify these networks by placing constraints on compartments and hubs, it is unclear how these simplifications alter the structure of metabolic networks and the interpretation of metabolomic experiments. Results We curated and adapted the latest systemic model of human metabolism and developed customizable tools to define metabolic networks with and without compartmentalization in subcellular organelles and with or without inclusion of prolific metabolite hubs. Compartmentalization made networks larger, less dense, and more modular, whereas hubs made networks larger, more dense, and less modular. When present, these hubs also dominated shortest paths in the network, yet their exclusion exposed the subtler prominence of other metabolites that are typically more relevant to metabolomic experiments. We applied the non-compartmental network without metabolite hubs in a retrospective, exploratory analysis of metabolomic measurements from 5 studies on human tissues. Network clusters identified individual reactions that might experience differential regulation between experimental conditions, several of which were not apparent in the original publications. Conclusions Exclusion of specific metabolite hubs exposes modularity in both compartmental and non-compartmental metabolic networks, improving detection of relevant clusters in omic measurements. Better computational detection of metabolic network clusters in large data sets has potential to identify differential regulation of individual genes, transcripts, and proteins.
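As a toy, hedged illustration of the hub-exclusion idea (using a handful of hypothetical reactions rather than the curated human model), the sketch below builds a metabolite co-occurrence graph with networkx and compares modularity and shortest paths with and without common hub metabolites:

```python
# Effect of removing hub metabolites (water, ATP, cofactors) on a toy metabolite graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Each reaction lists the metabolites it involves; hubs appear in many reactions.
reactions = [
    ["glucose", "atp", "g6p", "adp"],
    ["g6p", "f6p"],
    ["f6p", "atp", "f16bp", "adp"],
    ["f16bp", "dhap", "g3p"],
    ["g3p", "nad", "bpg", "nadh"],
    ["citrate", "h2o", "isocitrate"],
    ["isocitrate", "nad", "akg", "nadh"],
    ["akg", "nad", "succinyl_coa", "nadh"],
    ["fumarate", "h2o", "malate"],
    ["malate", "nad", "oxaloacetate", "nadh"],
]

def build_graph(reactions, exclude=()):
    g = nx.Graph()
    for rxn in reactions:
        mets = [m for m in rxn if m not in exclude]
        # Connect every pair of metabolites that co-occur in the same reaction.
        g.add_edges_from((a, b) for i, a in enumerate(mets) for b in mets[i + 1:])
    return g

for label, exclude in [("with hubs", ()), ("without hubs", ("h2o", "atp", "adp", "nad", "nadh"))]:
    g = build_graph(reactions, exclude)
    communities = greedy_modularity_communities(g)
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{label}: nodes={g.number_of_nodes()}, "
          f"modularity={modularity(g, communities):.2f}, "
          f"mean shortest path (largest component)={nx.average_shortest_path_length(giant):.2f}")
```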


2013 ◽  
Vol 12 (6) ◽  
pp. 2858-2868 ◽  
Author(s):  
Nadin Neuhauser ◽  
Nagarjuna Nagaraj ◽  
Peter McHardy ◽  
Sara Zanivan ◽  
Richard Scheltema ◽  
...  

2012 ◽  
Vol 38 (2) ◽  
pp. 57-69 ◽  
Author(s):  
Abdulghani Hasan ◽  
Petter Pilesjö ◽  
Andreas Persson

Global change and greenhouse gas (GHG) emission modelling depend on accurate wetness estimations for predictions of, e.g., methane emissions. This study aims to quantify how slope, drainage area and the topographic wetness index (TWI) vary with the resolution of digital elevation models (DEMs) for a flat peatland area. Six DEMs with spatial resolutions from 0.5 to 90 m were interpolated with four different search radii, and the relationship between the accuracy of the DEM and the slope was tested. The LiDAR elevation data were divided into two data sets; the density of points made it possible to build an evaluation data set whose points lie no more than 10 mm from the cell centre points of the interpolation data set. The DEM was evaluated using a quantile-quantile test and the normalized median absolute deviation, and its accuracy was found to be independent of resolution when the same search radius was used. The accuracy of the estimated elevation for different slopes was tested using the 0.5 m DEM and showed a higher deviation from the evaluation data in steep areas. Slope estimates differed between resolutions by values that exceeded 50%. Drainage areas were tested for three resolutions, with coinciding evaluation points; the ability to generate the drainage area at each resolution was tested by pairwise comparison of three data subsets and showed differences of more than 50% at 25% of the evaluated points. The results show that consideration of DEM resolution is a necessity when slope, drainage area and TWI data are used in large-scale modelling.
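For readers unfamiliar with the quantities compared here, the sketch below computes slope, D8 drainage area, and the topographic wetness index TWI = ln(a / tan β) on a small synthetic DEM; it is a minimal illustration, assuming a simple D8 routing and an arbitrary 0.5 m cell size, not the interpolation workflow used in the study:

```python
# Slope, D8 drainage area and TWI on a toy, gently sloping DEM.
import numpy as np

cell = 0.5                                   # grid resolution in metres (hypothetical)
rng = np.random.default_rng(3)
n = 100
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dem = 0.02 * xx * cell + 0.01 * yy * cell + 0.05 * rng.random((n, n))

# Slope (radians) from central differences.
dzdy, dzdx = np.gradient(dem, cell)
slope = np.arctan(np.hypot(dzdx, dzdy))

# D8 flow accumulation: visit cells from high to low elevation and pass the
# accumulated area to the steepest downslope neighbour.
area = np.full(dem.shape, cell * cell)
order = np.argsort(dem, axis=None)[::-1]
neigh = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
for idx in order:
    r, c = divmod(idx, n)
    best, target = 0.0, None
    for dr, dc in neigh:
        rr, cc = r + dr, c + dc
        if 0 <= rr < n and 0 <= cc < n:
            drop = (dem[r, c] - dem[rr, cc]) / (cell * np.hypot(dr, dc))
            if drop > best:
                best, target = drop, (rr, cc)
    if target is not None:
        area[target] += area[r, c]

# Specific catchment area and TWI; the slope floor avoids division by zero on flats.
spec_area = area / cell
twi = np.log(spec_area / np.tan(np.maximum(slope, 1e-4)))
print(f"mean slope: {np.degrees(slope).mean():.2f} deg, mean TWI: {twi.mean():.2f}")
```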


2015 ◽  
Vol 8 (1) ◽  
pp. 421-434 ◽  
Author(s):  
M. P. Jensen ◽  
T. Toto ◽  
D. Troyan ◽  
P. E. Ciesielski ◽  
D. Holdridge ◽  
...  

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities, including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
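As a schematic, hedged example of how a humidity bias correction propagates into a derived convective level, the sketch below applies a hypothetical 4% relative-humidity scaling and recomputes an approximate lifting condensation level using a Magnus-formula dewpoint and the common ~125 m per degree of dewpoint depression rule; the values and the correction factor are illustrative, not the campaign's sensor-specific corrections:

```python
# How a dry-bias correction to relative humidity shifts an approximate LCL height.
import numpy as np

def dewpoint_c(temp_c, rh_percent):
    """Dewpoint from temperature and relative humidity via the Magnus formula."""
    a, b = 17.625, 243.04
    gamma = np.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def lcl_height_m(temp_c, rh_percent):
    """Approximate lifting condensation level height above the surface (~125 m per K of dewpoint depression)."""
    return 125.0 * (temp_c - dewpoint_c(temp_c, rh_percent))

surface_t = 28.0                      # surface temperature, degrees C (illustrative sounding)
measured_rh = 55.0                    # relative humidity reported by the sonde, percent
corrected_rh = measured_rh * 1.04     # hypothetical +4% dry-bias correction

print(f"LCL from raw RH:       {lcl_height_m(surface_t, measured_rh):.0f} m")
print(f"LCL from corrected RH: {lcl_height_m(surface_t, corrected_rh):.0f} m")
```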


2018 ◽  
Vol 22 (6) ◽  
pp. 3105-3124 ◽  
Author(s):  
Zilefac Elvis Asong ◽  
Howard Simon Wheater ◽  
Barrie Bonsal ◽  
Saman Razavi ◽  
Sopan Kurkute

Abstract. Drought is a recurring extreme climate event and among the most costly natural disasters in the world. This is particularly true over Canada, where drought is both a frequent and damaging phenomenon with impacts on regional water resources, agriculture, industry, aquatic ecosystems, and health. However, nationwide drought assessments are currently lacking and impacted by limited ground-based observations. This study provides a comprehensive analysis of historical droughts over the whole of Canada, including the role of large-scale teleconnections. Drought events are characterized by the Standardized Precipitation Evapotranspiration Index (SPEI) over various temporal scales (1, 3, 6, and 12 consecutive months, 6 months from April to September, and 12 months from October to September) applied to different gridded monthly data sets for the period 1950–2013. The Mann–Kendall test, rotated empirical orthogonal function, continuous wavelet transform, and wavelet coherence analyses are used, respectively, to investigate the trend, spatio-temporal patterns, periodicity, and teleconnectivity of drought events. Results indicate that southern (northern) parts of the country experienced significant trends towards drier (wetter) conditions, although substantial variability exists. Two spatially well-defined regions with different temporal evolution of droughts were identified: the Canadian Prairies and northern central Canada. The analyses also revealed the presence of a dominant periodicity of between 8 and 32 months in the Prairie region and between 8 and 40 months in the northern central region. These cycles of low-frequency variability are found to be associated principally with the Pacific–North American (PNA) pattern and the Multivariate El Niño/Southern Oscillation Index (MEI), more so than with the other large-scale climate indices considered. This study is the first of its kind to identify dominant periodicities in drought variability over the whole of Canada in terms of when the drought events occur, their duration, and how often they occur.
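As an illustration of one of the tools named above, the sketch below implements the (unmodified) Mann–Kendall trend test and applies it to a synthetic monthly drought-index series standing in for SPEI; the tie and autocorrelation corrections used in practice are omitted:

```python
# Mann-Kendall trend test on a synthetic monthly drought-index series.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the S statistic, standardized Z, and two-sided p-value (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p

rng = np.random.default_rng(4)
months = np.arange(12 * 64)                                              # 1950-2013, monthly (synthetic)
index = -0.01 * months / 12.0 + rng.normal(scale=1.0, size=months.size)  # weak drying trend plus noise
s, z, p = mann_kendall(index)
trend = "significant drying trend" if z < 0 and p < 0.05 else "no significant trend"
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f} -> {trend}")
```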


2021 ◽  
Author(s):  
Florian Betz ◽  
Magdalena Lauermann ◽  
Bernd Cyffka

In fluvial geomorphology as well as in freshwater ecology, rivers are commonly seen as nested hierarchical systems functioning over a range of spatial and temporal scales. Thus, for a comprehensive assessment, information on various scales is required. Over the past decade, remote-sensing-based approaches have become increasingly popular in river science to increase the spatial scale of analysis. However, data-scarce areas have been mostly ignored so far, despite the fact that most remaining free-flowing, and thus ecologically valuable, rivers worldwide are located in regions characterized by a lack of data sources like LiDAR or even aerial imagery. High-resolution satellite data would be able to fill this data gap, but they tend to be too costly for large-scale applications, which limits comprehensive studies of river systems in such remote areas. This in turn is a limitation for the management and conservation of these rivers.

In this contribution, we suggest an approach for river corridor mapping based on open-access data only, in order to foster large-scale geomorphological mapping of river corridors in data-scarce areas. For this aim, we combine advanced terrain analysis with multispectral remote sensing using the SRTM-1 DEM along with Landsat OLI imagery. We take the Naryn River in Kyrgyzstan as an example to demonstrate the potential of these open-access data sets to derive a comprehensive set of parameters for characterizing this river corridor. The methods are adapted to the specific characteristics of medium-resolution open-access data sets and include an innovative, fuzzy-logic-based approach for riparian zone delineation, longitudinal profile smoothing based on constrained quantile regression, and a delineation of the active channel width as needed for specific stream power computation. In addition, an indicator for river dynamics based on Landsat time series is developed. For each derived river corridor parameter, a rigorous validation is performed. The results demonstrate that our open-access approach for geomorphological mapping of river corridors is capable of providing results sufficiently accurate to derive reach-averaged information. Thus, it is well suited for large-scale river characterization in data-scarce regions where otherwise the river corridors would remain largely unexplored from an up-to-date riverscape perspective. Such a characterization might be an entry point for further, more detailed research in selected study reaches and can deliver the required comprehensive background information for a range of topics in river science.
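As a small, hedged example of the specific stream power computation mentioned above, ω = ρ g Q S / w, the sketch below uses hypothetical reach-averaged discharge, slope, and active channel width values rather than Naryn River data:

```python
# Specific stream power per reach: omega = rho * g * Q * S / w.
RHO_WATER = 1000.0   # water density, kg m^-3
G = 9.81             # gravitational acceleration, m s^-2

def specific_stream_power(discharge_m3s, slope, active_width_m):
    """Specific stream power in W m^-2 for a reach."""
    return RHO_WATER * G * discharge_m3s * slope / active_width_m

reaches = [
    # (name, Q [m^3 s^-1], slope [m/m], active channel width [m]) -- illustrative values only
    ("upstream gorge", 90.0, 0.012, 40.0),
    ("braided middle reach", 120.0, 0.004, 180.0),
    ("lowland reach", 150.0, 0.001, 220.0),
]

for name, q, s, w in reaches:
    print(f"{name}: {specific_stream_power(q, s, w):.1f} W/m^2")
```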

