A timeline toolkit for cold case investigations

2019 ◽  
Vol 10 (2) ◽  
pp. 47-63
Author(s):  
David Keatley ◽  
David D. Clarke

Purpose The purpose of this paper is to outline a variety of related methods for helping with criminal (cold) case investigations. Despite the best efforts of police investigations, many cases around the world run out of leads and go cold. While many police departments around the world have developed specialist groups and task forces, academics have also been developing new methods that can assist with investigations. Design/methodology/approach Cold cases, by their very nature, typically comprise incomplete data sets that many traditional statistical methods are not suited to. Groups of researchers have therefore developed temporal, dynamic analysis methods to offer new insights into criminal investigations. These methods are combined into a timeline toolkit and are outlined in the current paper. Findings Methods from the timeline toolkit have already been successfully applied to many cold cases, turning them back into current cases. In this paper, two real-world cold cases are analysed with methods from the timeline toolkit to provide examples of how these methods can be applied in further cold cases. Originality/value Methods from the timeline toolkit provide a novel approach to investigating current and cold cases. This review provides academics and practitioners with a guide to begin using and developing these methods and forming successful collaborations with police departments and cold case task forces. The methods are also suitable for wider groups to use in their investigations.
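
One concrete temporal method associated with this line of work is behaviour sequence analysis. Below is a minimal Python sketch, assuming an invented set of coded case events, that computes first-order transition probabilities along a timeline; the event labels are hypothetical and the sketch is a simplified illustration, not the authors' full toolkit.

```python
from collections import Counter, defaultdict

# Hypothetical coded timeline of case events (invented for illustration).
timeline = ["argument", "victim_last_seen", "phone_inactive",
            "suspect_alibi_given", "body_found", "suspect_alibi_given"]

def transition_probabilities(events):
    """Estimate P(next event | current event) from consecutive event pairs."""
    counts = Counter(zip(events, events[1:]))
    totals = defaultdict(int)
    for (current, _nxt), n in counts.items():
        totals[current] += n
    return {(a, b): n / totals[a] for (a, b), n in counts.items()}

if __name__ == "__main__":
    for (a, b), p in sorted(transition_probabilities(timeline).items()):
        print(f"{a} -> {b}: {p:.2f}")
```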

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Fazıl Gökgöz ◽  
Engin Yalçın

Purpose This paper aims to assess the efficiency levels of World Cup teams via the slack-based data envelopment analysis (DEA) approach, which contributes to filling an important gap in performance measurement in football. Design/methodology/approach This study focuses on a comparative analysis of the past two World Cups. The authors initially estimate the efficiency of the World Cup teams via the slack-based DEA approach, which is a novel approach to sports performance measurement, and present the conventional DEA results for comparison. They also include improvement ratios, which provide significant detail for inefficient countries seeking to enhance their efficiency, as well as effectiveness ratings to present a complete performance overview of the World Cup teams. Findings According to the analysis results of the slack-based DEA approach, titleholders Germany and France are found to be efficient teams in the 2014 and 2018 World Cups, respectively. Belgium and Russia recorded the highest efficiency improvements in the 2018 World Cup. The results of the slack-based DEA approach significantly overlap with the actual performance of teams. Originality/value This study presents novelty in football performance measurement by adopting the slack-based DEA with an undesirable output model for the performance measurement of the World Cup teams. This empirical analysis would be a pioneer study measuring the performance of football teams via the slack-based DEA approach.
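
The slack-based model itself is a fractional programme, so as a grounding illustration here is a sketch of a standard input-oriented CCR formulation of conventional DEA, the baseline the authors compare against, solved as a linear programme with SciPy. The team input/output figures are invented placeholders.

```python
import numpy as np
from scipy.optimize import linprog

# Invented example: rows are DMUs (teams), columns are inputs/outputs.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])  # inputs (e.g. fouls, losses of possession)
Y = np.array([[5.0], [4.0], [6.0]])                  # outputs (e.g. goals scored)

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o:
    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,  sum_j lam_j y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)          # decision vector z = [theta, lam_1..lam_n]
    c[0] = 1.0                   # minimise theta
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])     # X lam - theta x_o <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])      # -Y lam <= -y_o
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```

A slack-based measure would additionally introduce explicit input and output slack variables and score each team on the slacks themselves rather than on a radial contraction factor.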


2014 ◽  
Vol 22 (4) ◽  
pp. 358-370 ◽  
Author(s):  
John Haggerty ◽  
Sheryllynne Haggerty ◽  
Mark Taylor

Purpose – The purpose of this paper is to propose a novel approach that automates the visualisation of both quantitative data (the network) and qualitative data (the content) within emails to aid the triage of evidence during a forensics investigation. Email remains a key source of evidence during a digital investigation, and a forensics examiner may be required to triage and analyse large email data sets for evidence. Current practice utilises tools and techniques that require a manual trawl through such data, which is a time-consuming process. Design/methodology/approach – This paper applies the methodology to the Enron email corpus, and in particular one key suspect, to demonstrate the applicability of the approach. Resulting visualisations of network narratives are discussed to show how network narratives may be used to triage large evidence data sets. Findings – Using the network narrative approach enables a forensics examiner to quickly identify relevant evidence within large email data sets. Within the case study presented in this paper, the results identify key witnesses, other actors of interest to the investigation and potential sources of further evidence. Practical implications – The implications are for digital forensics examiners or for security investigations that involve email data. The approach posited in this paper demonstrates the triage and visualisation of email network narratives to aid an investigation and identify potential sources of electronic evidence. Originality/value – There are a number of network visualisation applications in use. However, none of these enable the combined visualisation of quantitative and qualitative data to provide a view of what the actors are discussing and how this shapes the network in email data sets.
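
As an illustration of combining the quantitative network with qualitative content, the sketch below builds a directed email graph with networkx, weighting edges by message volume and attaching the most frequent content terms. The sample messages are invented stand-ins; parsing the actual Enron corpus is omitted.

```python
import re
from collections import Counter

import networkx as nx

# Invented (sender, recipient, body) triples standing in for parsed emails.
messages = [
    ("alice@enron.com", "bob@enron.com", "raptor structures need approval"),
    ("alice@enron.com", "bob@enron.com", "raptor hedge losses growing"),
    ("bob@enron.com", "carol@enron.com", "quarterly numbers look fine"),
]

G = nx.DiGraph()
for sender, recipient, body in messages:
    words = re.findall(r"[a-z]+", body.lower())
    if G.has_edge(sender, recipient):
        G[sender][recipient]["count"] += 1
        G[sender][recipient]["terms"].update(words)
    else:
        G.add_edge(sender, recipient, count=1, terms=Counter(words))

# Each edge now carries message volume (quantitative) plus its dominant
# content terms (qualitative) -- the "narrative" layered on the network.
for u, v, data in G.edges(data=True):
    top = [w for w, _ in data["terms"].most_common(3)]
    print(f"{u} -> {v}: {data['count']} messages, top terms: {top}")
```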


Author(s):  
Fatima Isiaka ◽  
Kassim S Mwitondi ◽  
Adamu M Ibrahim

Purpose – The purpose of this paper is to propose a forward search algorithm for detecting and identifying natural structures arising in human-computer interaction (HCI) and human physiological response (HPR) data. Design/methodology/approach – The paper portrays aspects that are essential to modelling and precision in detection. The method involves a purpose-built algorithm for detecting outliers in data so as to recognise natural patterns in continuous streams such as HCI-HPR data. The detected categorical data are simultaneously labelled, based on the data's reliance on parametric rules, for use in the predictive models of classification algorithms. Data were also simulated from a multivariate normal distribution and used to compare against and validate the original data. Findings – Results show that the forward search method provides robust features that are capable of repelling over-fitting in physiological and eye movement data. Research limitations/implications – One limitation of the robust forward search algorithm is that when the residual values have more digits than the stack can hold, it yields an error warning; to counter this, the data sets are normally standardized by taking the logarithm of the model values before running the algorithm. Practical implications – The authors conducted some of the experiments at individual residences, which may introduce uncontrolled environmental conditions. Originality/value – The novelty of the method lies in detecting outliers in HCI and HPR data sets based on Mahalanobis distances, and it can also handle large data sets with p possible parameters. The improvement made to the algorithm is the addition of richer graphical display and rendering of the residual plots.
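
A minimal sketch of the forward-search idea, assuming a standard formulation: start from a small presumed-clean subset, refit location and scatter, and grow the subset by Mahalanobis distance so that outliers enter last. The simulated data loosely mirror the paper's multivariate-normal simulation; this is not the authors' exact algorithm.

```python
import numpy as np

def forward_search(X, start=10, rng=None):
    """Forward search: grow a subset one observation at a time, always keeping
    the points closest in Mahalanobis distance to the current subset's fit.
    Observations admitted last are outlier candidates."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    subset = rng.choice(n, size=start, replace=False)
    order = []
    while len(subset) < n:
        mu = X[subset].mean(axis=0)
        cov = np.cov(X[subset], rowvar=False) + 1e-8 * np.eye(p)  # regularise
        inv = np.linalg.inv(cov)
        diff = X - mu
        d2 = np.einsum("ij,jk,ik->i", diff, inv, diff)  # squared Mahalanobis
        subset = np.argsort(d2)[:len(subset) + 1]       # keep the closest points
        order.append(subset[-1])                        # farthest point admitted
    return order

# Simulated data with five planted outliers far from the main cluster.
X = np.vstack([np.random.default_rng(0).normal(0, 1, (200, 3)),
               np.random.default_rng(1).normal(8, 1, (5, 3))])
print("Last observations to enter the subset:", forward_search(X, rng=0)[-5:])
```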


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Thomas Scott ◽  
Charles Wellford

Purpose This paper addresses the clearance of aggravated assaults (AAs). Specifically, the authors consider variations in these clearances over time for large agencies and test which crime, investigation and agency factors are associated with the likelihood of clearance by arrest or exceptional means. In doing this work, they seek to extend the understanding of how police can improve their investigations and ability to solve serious offenses. Design/methodology/approach Using case, investigative and organizational data collected from seven large police departments selected on the basis of their trajectory of index crime clearances, and measures of case characteristics, investigative effort and organizational best practices, this paper uses descriptive and inferential statistics to analyze AA investigations and case clearance. Findings Key findings include the following: trajectories of AA clearance vary across large agencies and covary with a measure of organizational best practices, and the relationship between investigative effort and case clearance can depend on organizational practices. The authors find that measures of investigative effort are either not related to case clearance or there is a negative association. Research limitations/implications Now that police researchers have a better understanding of AAs and their investigations, they need to test how this knowledge can be used to improve the quality of police investigations. Tests, preferably multi-agency randomized control trials, of new investigative strategies and organizational practices are needed. Originality/value This research is original in that it uses a multi-agency sample and crime, investigation and organizational measures to understand AA clearance.
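
The abstract does not name the inferential model; for a binary cleared/not-cleared outcome, logistic regression is a common choice. The sketch below, with entirely invented case-level predictors and a simulated outcome, shows what such an association test could look like.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical case-level predictors: witness cooperation, forensic evidence
# collected, and a count of investigative steps taken.
witness = rng.integers(0, 2, n)
forensics = rng.integers(0, 2, n)
steps = rng.poisson(4, n)
X = np.column_stack([witness, forensics, steps])

# Simulated clearance outcome; in real work this would come from case files.
logit = -1.0 + 1.5 * witness + 0.8 * forensics + 0.05 * steps
cleared = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, cleared)
for name, coef in zip(["witness", "forensics", "steps"], model.coef_[0]):
    print(f"{name}: odds ratio = {np.exp(coef):.2f}")
```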


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Philip J. Cook ◽  
Anthony Berglund ◽  
Matthew Triano

Purpose The purpose of this study is to describe the creation, implementation, activities and rationale for the Area Technology Centers (ATCs), an innovation adopted by the Chicago Police Department’s (CPD’s) Bureau of Detectives (BoD) in 2019 for the purpose of supporting investigations of crimes of serious violence by deploying specialized teams of officers to gather and process video and digital evidence. Design/methodology/approach This case study utilizes historical information and descriptive data generated by a record-keeping system adopted by the ATCs. Findings The ATCs were developed as a collaboration between the CPD and the University of Chicago Crime Lab (a research center). The start-up was funded by a gift from the Griffin Foundation. Detectives have made extensive use of the services provided by the ATCs from the beginning, with the result that homicide and shooting investigations now have access to more video and digital evidence that has been processed by state-of-the-art equipment. The CPD has assumed budget responsibility for the ATCs, which is an indication of their success. The ATC teams have been assembled by voluntary transfers by sworn officers, together with an embedded analyst from the University of Chicago. Practical implications The ATC model could be adopted by other large police departments. The study finds that ATCs can be effectively staffed by redeploying and training existing staff and that their operation does not require a budget increase. Social implications By arguably making police investigations of shooting cases more efficient, the ATCs have the potential to increase the clearance rate and thereby prevent future gun violence. Originality/value The ATCs are a novel response to the challenges of securing and making good use of video and digital evidence in police investigations.


2019 ◽  
Vol 37 (1) ◽  
pp. 95-107 ◽  
Author(s):  
B.S. Shivaram ◽  
B.S. Biradar

Purpose This paper aims to examine the grey literature archiving pattern at open-access repositories, with special reference to Indian open-access repositories. Design/methodology/approach The Bielefeld Academic Search Engine (BASE) was used to collect data on the different document types archived by open-access repositories across the world. Data were collected via the advanced search and browse features available in BASE, covering document types, the number of repositories by country and Indian academic and research repositories, and were tabulated using MS Excel for further analysis. Findings The findings indicate that open-access repositories across the world primarily archive reviewed literature. Grey literature is archived more at European and North American repositories than in the rest of the world. Reports, theses, dissertations and data sets are the major grey document types archived. In India, a significant contributor to the BASE index with 146 open-access sources, reviewed literature is the largest archived document type, and grey literature is above the world average due to the presence of theses and dissertations in the repositories of academic institutions. Originality/value Grey literature is considered a valuable source of information for research and development. The study provides insights into the amount of grey content archived at open-access repositories. These findings can be used to investigate the reasons, including technology limitations, for the lower volume of grey content in repositories. Furthermore, this study helps to better understand the grey literature archiving pattern and the need for corrective measures based on the success stories of repositories in Europe and North America.
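
As an illustration of the tabulation behind such findings, the sketch below computes the grey-literature share of archived records per region with pandas. The counts, regions and grey document-type list are invented placeholders for figures harvested from BASE.

```python
import pandas as pd

# Invented record counts by repository region and document type, standing in
# for figures gathered via BASE's browse-by-document-type facets.
records = pd.DataFrame({
    "region": ["Europe", "Europe", "N. America", "N. America", "India", "India"],
    "doc_type": ["article", "thesis", "article", "report", "article", "thesis"],
    "count": [120000, 40000, 90000, 25000, 30000, 12000],
})

# Document types treated as grey literature in this sketch.
grey_types = {"thesis", "report", "dissertation", "dataset"}
records["grey_count"] = records["count"].where(records["doc_type"].isin(grey_types), 0)

# Share of grey literature per region -- the paper's central comparison.
totals = records.groupby("region")[["count", "grey_count"]].sum()
totals["grey_share"] = totals["grey_count"] / totals["count"]
print(totals["grey_share"].round(3))
```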


2019 ◽  
Vol 16 (1) ◽  
pp. 79-93
Author(s):  
ELyazid Akachar ◽  
Brahim Ouhbi ◽  
Bouchra Frikh

Purpose The purpose of this paper is to present an algorithm for detecting communities in social networks. Design/methodology/approach The majority of existing methods for community detection in social networks are based on structural information and neglect content information. In this paper, the authors propose a novel approach that combines content and structure information to discover more meaningful communities in social networks. To integrate content information into the process of community detection, the authors exploit the texts involved in social networks to identify users’ topics of interest. These topics are detected using statistical and semantic measures, which allow the users to be divided into groups such that each group represents a distinct topic. The authors then perform link analysis within each group to discover the users who are highly interconnected (communities). Findings To validate the performance of the approach, the authors carried out a set of experiments on four real-life data sets and compared their method with classical methods that ignore content information. Originality/value The experimental results demonstrate that the quality of the community structure is improved when content and structure information are both taken into account during community detection.
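
A minimal sketch of this two-stage idea, content grouping followed by link analysis, is given below. TF-IDF plus k-means stands in for the paper's statistical and semantic topic measures, and connected components stand in for its community step; the users, texts and edges are invented.

```python
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented users, their posted texts, and follow edges.
users = ["u1", "u2", "u3", "u4"]
texts = ["football match goals", "league cup football",
         "neural networks training", "deep learning networks"]
edges = [("u1", "u2"), ("u3", "u4"), ("u1", "u3")]

# Step 1 (content): group users into topics from their texts.
vecs = TfidfVectorizer().fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)

# Step 2 (structure): within each topic group, keep only internal links and
# report the connected components as candidate communities.
G = nx.Graph(edges)
for topic in set(labels):
    members = {u for u, l in zip(users, labels) if l == topic}
    sub = G.subgraph(members)
    print(f"topic {topic}: communities = {list(nx.connected_components(sub))}")
```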


2019 ◽  
Vol 37 (3) ◽  
pp. 435-453
Author(s):  
Juncheng Wang ◽  
Guiying Li

Purpose The purpose of this study is to develop a novel region-based convolutional neural networks (R-CNN) approach that is more efficient while at least as accurate as existing R-CNN methods. In this way, the proposed method, namely R2-CNN, provides a more powerful tool for pedestrian extraction for person re-identification, which involves a huge number of images from which pedestrians need to be extracted efficiently to meet the real-time requirement. Design/methodology/approach The proposed R2-CNN is tested on two types of data sets. The first is the USC Pedestrian Detection data set, which consists of three sub-sets, USC-A, USC-B and USC-C, grouped with respect to their characteristics. This data set is used to test the performance of R2-CNN on the pedestrian extraction task, and the speed and performance of the investigated algorithms were recorded. The second is the PASCAL VOC 2007 data set, a common benchmark for object detection, which was used to analyze the characteristics of R2-CNN on the general object detection task. Findings This study proposes a novel R-CNN method that is both more efficient and more accurate than existing methods. The method, when used as an object detector, would facilitate the data preprocessing stage of person re-identification. Originality/value The study proposes a novel approach for object detection, which shows advantages in both efficiency and accuracy for the pedestrian detection task. It contributes to both data preprocessing for person re-identification and research on deep learning.
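
R2-CNN itself is not reproduced here; as a sketch of the R-CNN-family detection step that would feed pedestrian crops into a re-identification pipeline, the code below runs torchvision's stock Faster R-CNN and keeps confident person detections. The input image is a random placeholder tensor.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stock Faster R-CNN pretrained on COCO, standing in for the paper's R2-CNN.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)  # placeholder for a real frame, values in [0, 1]
with torch.no_grad():
    detections = model([image])[0]

# COCO class 1 is "person"; keep confident boxes for re-identification cropping.
keep = (detections["labels"] == 1) & (detections["scores"] > 0.8)
for box in detections["boxes"][keep]:
    x1, y1, x2, y2 = box.tolist()
    print(f"pedestrian box: ({x1:.0f}, {y1:.0f}) to ({x2:.0f}, {y2:.0f})")
```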


2019 ◽  
Vol 57 (8) ◽  
pp. 1937-1959 ◽  
Author(s):  
Anitha Chinnaswamy ◽  
Armando Papa ◽  
Luca Dezi ◽  
Alberto Mattiacci

Purpose The World Health Organisation estimates that 92 per cent of the world’s population does not have access to clean air. The World Bank estimated in 2013 that air pollution (AP) alone was responsible for $225bn in lost productivity. The purpose of this paper is to contribute to the current scholarly debate on the value of Big Data for effective healthcare management. Its focus on cardiovascular disease (CVD) in developing countries, a major cause of disability and premature death and a subject of increasing research in recent years, makes this research particularly valuable. Design/methodology/approach In order to assess the effects of AP on CVD in developing countries, the city of Bangalore was selected as a case study. Bangalore is one of the fastest growing economies in India, representative of the rapidly growing cities in the developing world. Demographic, AP and CVD data sets covering more than 1m historic records were obtained from governmental organisations. The spatial analysis of these data sets allowed visualisation of the correlation between the demographics of the city, the levels of pollution and deaths caused by CVDs, thus informing decision making in several sectors and at different levels. Findings Although there is increasing concern in councils and other responsible governmental agencies, the resources required to monitor and address the challenges of pollution are limited due to the high costs involved. This research shows that developments in Big Data, the Internet of Things and smart cities make it possible to monitor pollution at scale, producing high volumes of data, and that existing data analytics technologies can empower decision makers, and even the public, with knowledge about pollution. This paper demonstrates a methodological approach for the collection and visual representation of Big Data sets, allowing an understanding of the spread of CVDs across the city of Bangalore and enabling different stakeholders to query the data sets and reveal specific statistics for key hotspots where action is required. Originality/value This research demonstrates the value of Big Data in generating a strategic knowledge-driven decision-support system that provides focused and targeted interventions for environmental health management. The case study is based on the use of a geographic information system for the visualisation of a Big Data set collected from Bangalore, a region in India seriously affected by pollution.
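
The kind of spatial aggregation behind such hotspot maps can be sketched with geopandas: join point readings to ward polygons, average per ward, and plot a choropleth. The file name, column names and readings below are hypothetical; the actual Bangalore data sets are not assumed to be available in this form.

```python
import geopandas as gpd
from shapely.geometry import Point

# Hypothetical ward boundary file; the real demographic, AP and CVD data
# sets described in the paper are not public in this form.
wards = gpd.read_file("bangalore_wards.geojson")

# Two invented PM2.5 monitor readings at points within the city.
readings = gpd.GeoDataFrame(
    {"pm25": [82.0, 45.5]},
    geometry=[Point(77.59, 12.97), Point(77.64, 12.93)],
    crs=wards.crs,
)

# Assign each reading to the ward containing it, then average per ward --
# the spatial aggregation underlying a pollution hotspot map.
joined = gpd.sjoin(readings, wards, predicate="within")
ward_pm25 = joined.groupby("index_right")["pm25"].mean()
wards["mean_pm25"] = wards.index.map(ward_pm25)
wards.plot(column="mean_pm25", legend=True)  # choropleth of pollution levels
```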


2019 ◽  
Vol 26 (1) ◽  
pp. 93-116 ◽  
Author(s):  
Michael Minkov ◽  
Pinaki Dutt ◽  
Michael Schachner ◽  
Janar Jandosova ◽  
Yerlan Khassenbekov ◽  
...  

Purpose The purpose of this paper is to test the replicability of Hofstede’s value-based dimensions – masculinity–femininity (MAS–FEM) and individualism–collectivism (IDV–COLL) – in the field of consumer behavior, and to compare cultural prioritizations with respect to disposable income budgets across the world. Design/methodology/approach The authors asked 51,529 probabilistically selected respondents in 52 countries (50 nationally representative consumer panels and community samples from another two countries) what they would do with their money if they were rich. The questionnaire items targeted Hofstede’s MAS–FEM and IDV–COLL as well as a wider range of options deemed sufficiently meaningful, ethical and moral across the world. Findings The authors obtained two main dimensions. The first contrasts self-enhancing and altruistic choices (status and power-seeking spending vs donating for healthcare) and is conceptually similar to MAS–FEM. However, it is statistically related to Hofstede’s fifth dimension, or monumentalism–flexibility (MON–FLX), not to MAS–FEM. The second dimension contrasts conservative-collectivist choices and modern-hedonistic concerns (donating for religion and sports vs preserving nature and travel abroad for pleasure) and is a variant of COLL–IDV. Research limitations/implications The authors left out various potential consumer choices as they were deemed culturally incomparable or unacceptable in some societies. Nevertheless, the findings paint a sufficiently rich image of worldwide value differences underpinning idealized consumer behavior prioritizations. Practical implications The study could be useful to international marketing and consumer behavior experts. Social implications The study contributes to the understanding of modern cultural differences across the world. Originality/value This is the first large cross-cultural study that reveals differences in values through a novel approach: prioritizations of consumer choices. It enriches the understanding of IDV–COLL and MON–FLX, and, in particular, of the value prioritizations of the East Asian nations. The study provides new evidence that Hofstede’s MAS–FEM is a peculiarity of his IBM database with no societal analogue. Some of the so-called MAS–FEM values are components of MON–FLX, which is statistically unrelated to Hofstede’s MAS–FEM.
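
As an illustration of how two such dimensions can be extracted from choice data, the sketch below runs a two-component PCA over invented country-level item scores. The items echo the paper's examples, but the data and loadings are placeholders, not the study's results or its exact method.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Invented country-level mean endorsements of four spending choices,
# echoing the paper's items.
items = ["status_spending", "healthcare_donation",
         "religion_sports_donation", "nature_travel"]
scores = rng.random((52, 4))  # 52 countries x 4 items, placeholder data

# Two components, mirroring the paper's two main dimensions; the loadings
# show which choices anchor each pole of a dimension.
pca = PCA(n_components=2).fit(scores)
for i, comp in enumerate(pca.components_, start=1):
    print(f"dimension {i} loadings: {dict(zip(items, comp.round(2)))}")
```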

