A machine learning analysis of serious misconduct among Australian police

Crime Science ◽  
2020 ◽  
Vol 9 (1) ◽  
Author(s):  
Timothy I. C. Cubitt ◽  
Ken R. Wooden ◽  
Karl A. Roberts

Abstract
Fairness in policing, driven by the effective and transparent investigation and remediation of police misconduct, is vital to maintaining the legitimacy of policing agencies and the capacity for police to function within society. Research into police misconduct in Australia has traditionally been performed on an ad hoc basis, with limited access to law enforcement data. This research seeks to identify the antecedents of serious police misconduct, resulting in the dismissal or criminal charge of officers, within a large police misconduct dataset. Demographic and misconduct data were sourced for a sample of 600 officers who had committed instances of serious misconduct, and a matched sample of 600 comparison officers, across a 13-year period. A machine learning analysis, random forest, was utilised to produce a robust predictive model, with partial dependence plots employed to demonstrate within-variable interactions with serious misconduct. Prior instances of serious misconduct were particularly predictive of further serious misconduct, while misconduct was most prominent around mid-career. Secondary employment and performance issues were important predictors, while demographic variables typically outperformed complaint variables. This research suggests that serious misconduct is similarly prevalent among experienced officers as it is among junior officers, and that secondary employment is an important indicator of misconduct risk. Findings provide guidance for misconduct prevention policy among policing agencies.
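The abstract does not include the authors' code or feature list; the following is a minimal sketch of the kind of analysis described, using scikit-learn's random forest and partial dependence tooling on synthetic data with hypothetical feature names (years_of_service, prior_serious_misconduct, secondary_employment, performance_issues).

```python
# Minimal sketch of a random-forest misconduct model with partial dependence plots.
# Data and feature names are synthetic/hypothetical, not the study's actual dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
n = 1200  # 600 serious-misconduct officers + 600 matched comparison officers
X = pd.DataFrame({
    "years_of_service": rng.integers(1, 35, n),
    "prior_serious_misconduct": rng.poisson(0.3, n),
    "secondary_employment": rng.integers(0, 2, n),
    "performance_issues": rng.poisson(0.5, n),
})
# Synthetic outcome loosely reflecting the reported findings.
y = (X["prior_serious_misconduct"] + X["secondary_employment"]
     + rng.normal(0, 1, n) > 1.2).astype(int)

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Variable importance, analogous to ranking predictors of serious misconduct.
print(sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1]))

# Partial dependence: how predicted risk varies across a single feature.
PartialDependenceDisplay.from_estimator(
    model, X, features=["years_of_service", "secondary_employment"])
```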

Author(s):  
Dhurgham Al-Karawi ◽  
Shakir Al-Zaidi ◽  
Nisreen Polus ◽  
Sabah Jassim

Abstract
This paper reports on the development and performance of machine learning schemes for the analysis of chest CT scan images of Coronavirus COVID-19 patients and demonstrates significant success in efficiently and automatically testing for COVID-19 infection. In particular, an innovative frequency-domain algorithm, referred to as the FFT-Gabor scheme, is shown to predict the state of the patient in almost real time, with an average accuracy of 95.37%, sensitivity of 95.99%, and specificity of 94.76%. The FFT-Gabor scheme is adequately informative in that clinicians can visually examine the FFT-Gabor features to support their final diagnosis.
Key Strengths
The proposed FFT-Gabor scheme is an automatic machine learning scheme that works in real time, achieves significantly high accuracy with a very low false-negative rate, and can provide supporting evidence for the predicted decision by visually displaying the final features upon which the decision is made. This scheme will be most beneficial when used in addition to the RT-PCR swab test for non-symptomatic cases.
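The abstract does not specify how the FFT and Gabor components are combined; the sketch below is one plausible reading of such a pipeline, with illustrative filter parameters and a generic classifier standing in for the authors' scheme.

```python
# Hypothetical sketch of an FFT + Gabor feature pipeline for a CT slice.
# The exact construction used in the paper is not specified in the abstract;
# filter parameters and the classifier choice below are illustrative only.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def fft_gabor_features(ct_slice: np.ndarray) -> np.ndarray:
    """Combine a global FFT magnitude descriptor with Gabor filter responses."""
    # Global frequency-domain descriptor: log-magnitude spectrum statistics.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(ct_slice)))
    fft_feats = [np.log1p(spectrum).mean(), np.log1p(spectrum).std()]

    # Gabor filter bank over a few orientations and frequencies.
    gabor_feats = []
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        for frequency in (0.1, 0.2, 0.4):
            real, imag = gabor(ct_slice, frequency=frequency, theta=theta)
            magnitude = np.hypot(real, imag)
            gabor_feats.extend([magnitude.mean(), magnitude.std()])
    return np.array(fft_feats + gabor_feats)

# Toy usage with random "slices"; real input would be preprocessed CT images.
X = np.stack([fft_gabor_features(np.random.rand(128, 128)) for _ in range(20)])
y = np.random.randint(0, 2, 20)          # 1 = COVID-19 positive (synthetic labels)
clf = SVC(probability=True).fit(X, y)    # any standard classifier would do here
print(clf.predict(X[:3]))
```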


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Timothy I.C. Cubitt ◽  
Philip Birch

Purpose
There is a paucity of data available relating to the misconduct of police officers in larger policing agencies, typically resulting in case study approaches and limited insight into the factors associated with serious misconduct. This paper seeks to contribute to the emerging knowledge base on police misconduct through analysis of 28,429 complaints among 3,830 officers in the New York Police Department between 2000 and 2019.
Design/methodology/approach
This study utilized a data set consisting of officer and complainant demographics and officer complaint records. Machine learning analytics, specifically random forest, were employed to consider which variables were most associated with serious misconduct among officers who committed misconduct. Partial dependence plots were employed among variables identified as important, to consider the points at which misconduct was most and least likely to occur.
Findings
Prior instances of serious misconduct were particularly associated with further instances of serious misconduct, while remedial action did not appear to have an impact in preventing further misconduct. Inexperience, both in rank and age, was associated with misconduct. Specific prior complaints, such as minor use of force, did not appear to be particularly associated with instances of serious misconduct. The characteristics of the complainant held more importance than the characteristics of the officer.
Originality/value
The ability to analyze a data set of this size is unusual and important to progressing the knowledge area regarding police misconduct. This study contributes to the growing use of machine learning in understanding the police misconduct environment, and to more accurately tailoring misconduct prevention policy and practice.
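To make the partial-dependence step concrete, the sketch below computes a partial dependence curve by hand: for each candidate value of one variable, that value is held fixed across all rows and the model's predicted probabilities are averaged. Feature names and data are hypothetical, not the NYPD complaint records.

```python
# Locating where predicted misconduct risk peaks along one variable via a
# manually computed partial dependence curve over a fitted random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000
X = pd.DataFrame({
    "officer_age": rng.integers(21, 60, n),
    "years_in_rank": rng.integers(0, 25, n),
    "prior_serious_complaints": rng.poisson(0.4, n),
})
# Synthetic target: higher risk for inexperienced officers with prior complaints.
y = ((X["prior_serious_complaints"] > 0) & (X["years_in_rank"] < 5)).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y)

# Partial dependence of predicted probability on years_in_rank: hold the feature
# fixed at each grid value for every row, then average the predicted risk.
grid = np.arange(0, 25)
avg_risk = []
for v in grid:
    X_mod = X.copy()
    X_mod["years_in_rank"] = v
    avg_risk.append(model.predict_proba(X_mod)[:, 1].mean())

print("predicted risk peaks at years_in_rank =", grid[int(np.argmax(avg_risk))])
```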


2021 ◽  
Vol 14 (3) ◽  
pp. 101016 ◽  
Author(s):  
Jim Abraham ◽  
Amy B. Heimberger ◽  
John Marshall ◽  
Elisabeth Heath ◽  
Joseph Drabick ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1342 ◽
Author(s):  
Borja Nogales ◽  
Miguel Silva ◽  
Ivan Vidal ◽  
Miguel Luís ◽  
Francisco Valera ◽  
...  

5G communications have become an enabler for the creation of new and more complex networking scenarios, bringing together different vertical ecosystems. Such behavior has been fostered by the network function virtualization (NFV) concept, where orchestration and virtualization capabilities make it possible to supply network resources dynamically according to service needs. Nevertheless, the integration and performance of heterogeneous network environments, each one supported by a different provider and with specific characteristics and requirements, in a single NFV framework is not straightforward. In this work we propose an NFV-based framework capable of supporting the flexible, cost-effective deployment of vertical services through the integration of two distinct mobile environments and their networks: small-sized unmanned aerial vehicles (SUAVs), supporting a flying ad hoc network (FANET), and vehicles, supporting a vehicular ad hoc network (VANET). In this context, a use case involving the public safety vertical is used as an illustrative example to showcase the potential of this framework. This work also includes the technical implementation details of the proposed framework, allowing us to analyse and discuss the delays in the network service deployment process. The results show that deployment times can be significantly reduced through a distributed VNF configuration function based on the publish–subscribe model.
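The abstract does not detail the publish–subscribe mechanism; the following is a simplified, self-contained sketch of the idea behind a distributed VNF configuration function: the orchestrator publishes a configuration once, and every VNF instance subscribed to the topic applies it in parallel, rather than being configured sequentially. Class and topic names are illustrative; a real deployment would use an MQTT/AMQP broker and the framework's own VNFs.

```python
# Simplified in-process publish-subscribe analogue of distributed VNF configuration.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

class ConfigBroker:
    """Minimal in-process publish-subscribe broker (stand-in for MQTT/AMQP)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to all subscribers concurrently (parallel configuration).
        with ThreadPoolExecutor() as pool:
            list(pool.map(lambda cb: cb(message), self._subscribers[topic]))

class RouterVNF:
    def __init__(self, name):
        self.name = name

    def apply_config(self, config):
        print(f"{self.name}: applying routing config {config}")

broker = ConfigBroker()
for name in ("suav-router-1", "suav-router-2", "vehicle-router-1"):
    broker.subscribe("fanet-vanet/routing", RouterVNF(name).apply_config)

# One publish configures every subscribed VNF instance at roughly the same time,
# which is where the reduction in deployment time comes from.
broker.publish("fanet-vanet/routing", {"protocol": "OLSR", "mtu": 1400})
```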


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Peter Baumann ◽  
Dimitar Misev ◽  
Vlad Merticariu ◽  
Bang Pham Huu

Abstract
Multi-dimensional arrays (also known as raster data or gridded data) play a key role in many, if not all, science and engineering domains, where they typically represent spatio-temporal sensor, image, simulation-output, or statistics “datacubes”. As classic database technology does not support arrays adequately, such data today are maintained mostly in silo solutions, with architectures that tend to erode and not keep up with the increasing requirements on performance and service quality. Array Database systems attempt to close this gap by providing declarative query support for flexible ad hoc analytics on large n-D arrays, similar to what SQL offers on set-oriented data, XQuery on hierarchical data, and SPARQL and Cypher on graph data. Today, petascale Array Database installations exist, employing massive parallelism and distributed processing. Hence, questions arise about the technology and standards available, usability, and overall maturity. Several papers have compared models and formalisms, and benchmarks have been undertaken as well, typically comparing two systems against each other. While each of these represents valuable research, to the best of our knowledge there is no comprehensive survey combining model, query language, architecture, practical usability, and performance aspects. The scale of this comparison also differentiates our study, with 19 systems compared and four benchmarked, to an extent and depth clearly exceeding previous papers in the field; for example, subsetting tests were designed in a way that systems cannot be tuned specifically to these queries. It is hoped that this gives a representative overview to all who want to immerse themselves in the field, as well as clear guidance to those who need to choose the best-suited datacube tool for their application. This article presents results of the Research Data Alliance (RDA) Array Database Assessment Working Group (ADA:WG), a subgroup of the Big Data Interest Group. It has elicited the state of the art in Array Databases, technically supported by IEEE GRSS and CODATA Germany, to answer the question: how can data scientists and engineers benefit from Array Database technology? As it turns out, Array Databases can offer significant advantages in terms of flexibility, functionality, and extensibility, as well as performance and scalability; in total, the database approach of offering analysis-ready “datacubes” heralds a new level of service quality. Investigation shows that there is a lively ecosystem of technology with increasing uptake, and proven array analytics standards are in place. Consequently, such approaches have to be considered a serious option for datacube services in science, engineering, and beyond. Tools, though, vary greatly in functionality and performance.
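As a rough illustration of the operations such systems evaluate, the sketch below is an in-memory NumPy analogue of datacube subsetting and aggregation; in an actual Array Database these would be expressed as a declarative query (e.g. SQL/MDA or WCPS) and executed server-side instead of loading the cube into local memory. The cube layout (time, lat, lon) and index ranges are illustrative only.

```python
# In-memory NumPy analogue of declarative datacube subsetting and aggregation.
import numpy as np

# A synthetic 3-D datacube: 365 daily grids of 180 x 360 values (e.g. temperature).
cube = np.random.default_rng(0).normal(15.0, 10.0, size=(365, 180, 360))

# "Trim": select a spatio-temporal sub-cube (days 90-179, a lat/lon window).
subcube = cube[90:180, 30:60, 100:160]

# "Slice": drop the time dimension by fixing one day, yielding a 2-D grid.
day_slice = cube[200]

# Aggregation: per-day spatial mean over the window, condensing the lat/lon axes.
daily_mean = subcube.mean(axis=(1, 2))

print(subcube.shape, day_slice.shape, daily_mean.shape)
```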


Author(s):  
Dhiraj J. Pangal ◽  
Guillaume Kugener ◽  
Shane Shahrestani ◽  
Frank Attenello ◽  
Gabriel Zada ◽  
...  
