Improving police transparency in Canada

2021 ◽  
Vol 6 (2) ◽  
pp. 71-74
Author(s):  
Lance Valcour

The path to improved police transparency in Canada includes the use of advanced technology with capabilities such as artificial intelligence, machine learning, “cloud” enabled services, and an ever-increasing number of data collection and management tools. However, these innovations need to be closely linked with a national—not federal—stakeholder review of current legal, legislative, and privacy frameworks. This article provides readers with a high-level overview of the issue of police transparency in Canada. It then outlines a number of key challenges and opportunities for improving this transparency. It concludes with a call to action for key Canadian stakeholders to work collaboratively to improve police transparency in Canada.

2019 ◽  
Author(s):  
Xia Huiyi ◽  
Nankai Xia ◽  
Liu Liu ◽  
...  

With continued urbanization and the ongoing construction and renewal of cities, the human living environment has changed dramatically: residential community environments and service facilities, urban roads and street spaces, and the layout of urban public services have all been transformed. These elements answer the real needs of people in urban life, and their characteristics, or their shortcomings, inevitably shape users' psychological experience and therefore how the city is used. Studying how urban residents perceive changes in the living environment, and how those changes register psychologically and emotionally, thus has practical significance: it can help urban managers and builders optimize residents' living environments, and it has long been one of the topics of greatest interest to urban researchers. In the hierarchy of needs proposed by the American psychologist Abraham Maslow, safety is the most basic requirement after physiological needs. Safety, and especially psychological security, is therefore one of people's basic needs in the urban environment, and perceived psychological security is one of the most important indicators in urban environmental assessment. In the past, limited by the available technical means, studies of urban environmental psychological security often relied on small numbers of respondents, and such low-density data can hardly yield generalizable measures of perception. With the rapid development of the mobile Internet, online image data has grown geometrically, and advances in artificial intelligence in recent years have made image recognition and perception analysis based on machine learning feasible. The maturing of these technical conditions provides a basis for an urban renewal index evaluation system grounded in psychological security. In addition to urban street furniture data obtained through urban big data collection combined with artificial intelligence image analysis, this paper proposes a strategy for collecting large-scale psychological assessment data on the urban living environment. These data are crowdsourced, and the collection method is constrained by cost and by the current state of technology. At present, large numbers of users' psychological security preferences over urban street images are collected with a forced-choice method and then fitted statistically to produce a training set for urban environmental psychological security. In the future, when conditions are mature, brainwave feedback data collected in virtual reality scenes could be used for machine learning of psychological security, further improving the accuracy of the psychological security data.
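As an illustration of the fitting step described above, the following is a minimal sketch of turning forced-choice (pairwise) preferences over street images into per-image psychological security scores with a Bradley-Terry style model. The image identifiers, comparison data, and hyperparameters are invented for the example and are not the authors' dataset or exact method.

```python
# Minimal sketch: fit per-image "psychological security" scores from
# forced-choice comparisons using a Bradley-Terry model (illustrative data).
import math
from collections import defaultdict

# Each tuple (winner, loser) means a respondent judged `winner` as feeling safer.
comparisons = [("img_a", "img_b"), ("img_a", "img_c"), ("img_b", "img_c"),
               ("img_c", "img_a"), ("img_a", "img_b")]

scores = defaultdict(lambda: 0.0)  # log-strength per image

def fit(comparisons, scores, lr=0.05, epochs=200):
    """Gradient ascent on the Bradley-Terry log-likelihood."""
    for _ in range(epochs):
        for winner, loser in comparisons:
            # Probability the winner is preferred under the current scores.
            p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Push the winner up and the loser down by the residual.
            scores[winner] += lr * (1.0 - p_win)
            scores[loser] -= lr * (1.0 - p_win)
    return scores

fit(comparisons, scores)
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # images ordered from most to least "secure-feeling"
```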


Author(s):  
Fernando Enrique Lopez Martinez ◽  
Edward Rolando Núñez-Valdez

IoT, big data, and artificial intelligence are currently three of the most relevant and trending technologies for innovation and predictive analysis in healthcare. Many healthcare organizations are already developing their own home-centric data collection networks and intelligent big data analytics systems based on machine-learning principles. The benefit of using IoT, big data, and artificial intelligence for community and population health is better health outcomes for those populations and communities. The new generation of machine-learning algorithms can use the large standardized data sets generated in healthcare to improve the effectiveness of public health interventions. Much of these data come from sensors, devices, electronic health records (EHR), data generated by public health nurses, mobile data, social media, and the internet. This chapter shows a high-level implementation of a complete IoT, big data, and machine learning solution deployed in the city of Cartagena, Colombia, for hypertensive patients, using an eHealth sensor and Amazon Web Services components.
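As a rough illustration of the kind of analytics layer such a system might run on sensor readings, here is a minimal, hypothetical sketch: a scikit-learn classifier trained on simulated blood pressure readings to flag elevated values. The data, thresholds, and model choice are assumptions for illustration, not details of the deployed Cartagena system.

```python
# Minimal sketch: flag elevated blood-pressure readings from a home sensor
# with a simple classifier (simulated data, illustrative thresholds).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Simulated (systolic, diastolic) readings in mmHg.
X = rng.normal(loc=[125, 80], scale=[15, 10], size=(500, 2))
# Label a reading as elevated when systolic >= 140 or diastolic >= 90.
y = ((X[:, 0] >= 140) | (X[:, 1] >= 90)).astype(int)

model = LogisticRegression().fit(X, y)
new_reading = np.array([[150, 95]])
print("elevated risk probability:", model.predict_proba(new_reading)[0, 1])
```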


Author(s):  
Bhanu Chander

Artificial intelligence (AI) is often described as the attempt to build machines that can do everything a human being can do, and do it better; in other words, AI holds that data itself can produce solutions. Within AI, machine learning (ML) offers a wide variety of algorithms that produce increasingly accurate results. As technology improves, ever larger amounts of data become available, yet with conventional ML and AI it is difficult to extract high-level, abstract features from raw data, and hard even to know which features should be extracted. This motivates deep learning, whose algorithms are modeled on how the human brain processes data. Deep learning is a particular kind of machine learning that offers great power and flexibility by learning representations at multiple levels through the operations of multiple layers. This chapter gives a brief overview of deep learning, its platforms and models, autoencoders, CNNs, RNNs, and applications. Deep learning is likely to see many more successes in the near future because it requires very little engineering by hand.
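To make the autoencoder idea mentioned above concrete, here is a minimal sketch in PyTorch. The layer sizes and input dimensionality are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of an autoencoder: compress the input to a small code,
# then reconstruct it; the reconstruction error drives learning.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=784, n_hidden=32):
        super().__init__()
        # Encoder compresses the input to a low-dimensional code.
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        # Decoder reconstructs the input from that code.
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(16, 784)                     # a batch of flattened images
loss = nn.functional.mse_loss(model(x), x)  # reconstruction error
loss.backward()
```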


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 193 ◽  
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that are driving it, from processing the massive piles of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
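A short sketch of the pattern the survey highlights, clean high-level APIs sitting on fast low-level libraries: NumPy handles the vectorized math while scikit-learn exposes a simple fit/predict interface. The toy data and model settings here are assumptions for illustration.

```python
# Minimal sketch: NumPy arrays feed a scikit-learn estimator through its
# high-level fit/predict API (toy data, illustrative settings).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(1000, 20)                  # 1,000 samples, 20 features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # simple synthetic labels

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)  # parallel CPU training
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
# RAPIDS cuML offers a similarly named GPU estimator that follows the same
# fit/predict pattern, which is one reason the API style scales so well.
```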


First Monday ◽  
2019 ◽  
Author(s):  
Niel Chah

Interest in deep learning, machine learning, and artificial intelligence from industry and the general public has reached a fever pitch recently. However, these terms are frequently misused, confused, and conflated. This paper serves as a non-technical guide for those interested in a high-level understanding of these increasingly influential notions by exploring briefly the historical context of deep learning, its public presence, and growing concerns over the limitations of these techniques. As a first step, artificial intelligence and machine learning are defined. Next, an overview of the historical background of deep learning reveals its wide scope and deep roots. A case study of a major deep learning implementation is presented in order to analyze public perceptions shaped by companies focused on technology. Finally, a review of deep learning limitations illustrates systemic vulnerabilities and a growing sense of concern over these systems.


2020 ◽  
Vol 18 (2) ◽  
Author(s):  
Nedeljko Šikanjić ◽  
Zoran Ž. Avramović ◽  
Esad Jakupović

In today's world, devices able to communicate are emerging and multiplying daily. This advanced technology is generating ideas about how such devices can be used to gain financial benefits for enterprises, business, and the economy in general. The purpose of the research in this paper is to discover the trends in connecting these devices, collectively called the Internet of Things (IoT); the financial aspects of implementing IoT solutions; and how leaders in cloud computing and IoT are implementing additional advanced technologies such as machine learning and artificial intelligence to improve processes and increase revenue while bringing automation to end users. The development of the information society not only brings innovation to everyday life but also affects the economy. This effect is reflected in various business platforms, companies, and organizations, increasing the quality of the end product or service being provided.


Machine learning combines mathematical models, artificial intelligence approaches, and previously recorded data sets. It applies different learning algorithms to different types of data and is commonly classified into three types. Its strength is that it can use an artificial neural network and, based on error rates, adjust the weights to improve itself over subsequent epochs. However, machine learning works well only when features are defined accurately, and deciding which features to select requires good domain knowledge, making the approach dependent on the developer; a lack of domain knowledge hurts performance. This dependency inspired deep learning, which can discover features through self-training models and achieve better results than conventional artificial intelligence or machine learning alone. It relies on building blocks such as ReLU activations, gradient descent, and optimizers, as well as pooling layers for extracting features, which makes it the most capable approach available so far; applying such optimizers efficiently requires an understanding of the mathematical computations and convolutions running behind the layers. These modern approaches demand a high level of computation, requiring CPUs and GPUs; if such hardware is not available, the Google Colaboratory platform can be used instead. This paper demonstrates that the deep learning approach improves skin cancer detection, and it also aims to give the reader contextual knowledge of the practices mentioned above.
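The following is a minimal sketch (not the paper's actual network) showing the ingredients named above working together: convolutions, ReLU activations, pooling layers, and a gradient-descent optimizer applied to a toy binary skin-lesion classification setup. Image size, layer widths, and the optimizer choice are illustrative assumptions.

```python
# Minimal sketch of a CNN with ReLU, pooling, and an optimizer
# for a toy benign-vs-malignant lesion classifier (dummy data).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling halves spatial size
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                  # benign vs malignant logits
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # gradient-descent variant
images = torch.rand(8, 3, 64, 64)                # dummy batch of 64x64 RGB crops
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```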


Author(s):  
Bradley Camburn ◽  
Yuejun He ◽  
Sujithra Raviselvam ◽  
Jianxi Luo ◽  
Kristin Wood

Automation has enabled the design of increasingly complex products, services, and systems. Advanced technology lets designers automate repetitive tasks in earlier design phases, even in high-level conceptual ideation. One particularly repetitive task in ideation is processing the large concept sets that can be developed through crowdsourcing. This paper introduces a method for filtering, categorizing, and rating large sets of design concepts. It leverages unsupervised machine learning (ML) trained on open source databases. Input design concepts are written in natural language. The concepts are not pre-tagged, structured, or processed in any way that requires human intervention, nor does the approach require dedicated training on a sample set of designs. Concepts are assessed at the sentence level via a mixture of named entity tagging (keywords) through contextual sense recognition and topic tagging (sentence topic) through probabilistic mapping to a knowledge graph. The method also includes a filtering strategy, two metrics analogous to the design creativity metrics of novelty and level of detail, and a selection strategy for assessing design concepts. To test the method, four ideation cases were studied; over 4,000 concepts were generated and evaluated. Analyses include an asymptotic convergence analysis, a predictive industry case study, and a dominance test between several approaches to selecting high-ranking concepts. Notably, in a series of binary comparisons between concepts selected from the entire set by a time-limited human and those with the highest ML metric scores, the ML-selected concepts were dominant.
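For readers who want a feel for how free-text concepts can be scored without human tagging, here is a hedged sketch, not the authors' pipeline: it clusters concept sentences with TF-IDF features and uses distance from the nearest cluster centroid as a crude novelty proxy. The concept strings are invented examples.

```python
# Hedged sketch: cluster free-text design concepts and rank them by a
# simple novelty proxy (distance to the nearest centroid). Invented data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

concepts = [
    "a drone that delivers medical supplies to remote villages",
    "a solar-powered drone for crop monitoring",
    "an app that matches surplus food with local shelters",
    "a foldable bicycle helmet made from recycled plastic",
]

X = TfidfVectorizer().fit_transform(concepts)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Concepts far from every centroid look least like the rest of the set.
distances = kmeans.transform(X).min(axis=1)
for concept, d in sorted(zip(concepts, distances), key=lambda t: -t[1]):
    print(f"{d:.2f}  {concept}")
```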


Author(s):  
Prakhar Mehrotra

The objective of this chapter is to discuss the integration of advancements made in the field of artificial intelligence into existing business intelligence tools. Specifically, it discusses how a business intelligence tool can integrate time series analysis, supervised and unsupervised machine learning techniques, and natural language processing to unlock deeper insights, make predictions, and execute strategic business actions from within the tool itself. The chapter also provides a high-level overview of current state-of-the-art AI techniques and gives examples in the realm of business intelligence. Its eventual goal is to leave readers thinking about what the future of business intelligence might look like and how enterprises can benefit by integrating AI into it.
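As a small, hypothetical example of one integration the chapter describes, the sketch below embeds a time-series forecasting step of the kind a business intelligence tool could run internally. The monthly sales figures and the linear-trend model are assumptions for illustration.

```python
# Minimal sketch: forecast the next quarter from a monthly sales series
# with a linear trend model (invented figures).
import numpy as np
from sklearn.linear_model import LinearRegression

sales = np.array([120, 132, 128, 141, 150, 158, 163, 171, 180, 188, 195, 204])
t = np.arange(len(sales)).reshape(-1, 1)       # month index as the only feature

model = LinearRegression().fit(t, sales)       # fit the linear trend
next_quarter = np.arange(len(sales), len(sales) + 3).reshape(-1, 1)
print("forecast for the next three months:", model.predict(next_quarter).round(1))
```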


Author(s):  
Benjamin M. Abdel-Karim ◽  
Nicolas Pfeuffer ◽  
Oliver Hinz

Artificial Intelligence (AI) and Machine Learning (ML) are currently hot topics in industry and business practice, while management-oriented research disciplines seem reluctant to adopt these sophisticated data analytics methods as research instruments. Even the Information Systems (IS) discipline, with its close connections to Computer Science, seems conservative when conducting empirical research endeavors. To assess the magnitude of the problem and to understand its causes, we conducted a bibliographic review of publications in high-level IS journals. We reviewed 1,838 articles that matched corresponding keyword queries in journals from the AIS senior scholar basket, Electronic Markets, and Decision Support Systems (ranked B). In addition, we conducted a survey among IS researchers (N = 110). Based on the findings from our sample, we evaluate different potential causes that could explain why ML methods are rather underrepresented in top-tier journals and discuss how the IS discipline could successfully incorporate ML methods in research undertakings.

