Criminal Detection through Face Recognition

Author(s):  
Snehal Chobhe

Abstract: Face recognition is one of the most challenging topics in computer vision today. It has applications ranging from security and surveillance to entertainment sites. Face recognition software is useful in banks, airports, and other organizations for screening customers. Germany and Australia have deployed face recognition at borders and customs for Automatic Passport Control. The human face is a dynamic object with a high degree of variability in its appearance, which makes face recognition a difficult problem in computer vision. In this field, accuracy and speed of identification are a principal issue. Many challenges exist for face recognition. The robustness of a system can be hindered by individuals who alter their facial features by wearing coloured contact lenses, growing a moustache, putting on heavy make-up, etc. Ethical concerns are also associated with the process of recording, studying, and recognizing faces. Many individuals do not approve of surveillance systems that take numerous photographs of people who have not consented to this. The goal of this paper is to survey face detection and recognition methods and to offer a complete solution for image-based face detection and recognition with higher accuracy, a better response rate, and as an initial step toward video surveillance. The solution is proposed on the basis of tests performed on various face-rich data sets that vary in subjects, pose, expressions and illumination. Index Terms: face recognition, security and surveillance, Automatic Passport Control, malware detection, facial features, face-rich data sets.
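As a concrete point of reference for the image-based face detection this paper surveys, the following is a minimal sketch using OpenCV's bundled Haar-cascade detector. It is illustrative only and is not the authors' proposed solution; the input file name and the detector parameters are assumptions.

```python
# Minimal illustrative sketch of image-based face detection (not the paper's method).
# Assumes OpenCV is installed and that "input.jpg" exists; both are assumptions.
import cv2

# Load the stock frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("input.jpg")                  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detection runs on grayscale

# scaleFactor and minNeighbors trade off speed, recall, and false positives.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", image)
print(f"Detected {len(faces)} face(s)")
```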

2018 ◽  
Author(s):  
Farah Al-khalidi

This project presents a new approach for automatically tracking the human face as well as its facial features (nose, mouth, eyes) in a clear way. Such techniques are required in various future visual communication applications, such as teleconferencing, facial recognition systems, biometrics and human-computer interfaces. The facial features detected here are intended to be used in the future to measure respiration rate, as the nose is the most important region of the human face for breathing. Human face detection as an elliptical area was investigated; image processing techniques were then used to extract the human face as an elliptical area from the rest of the image. Several techniques were applied to detect the nose inside the elliptical area as a rectangular region, and the mouth and eye regions were then extracted inside the elliptical face area. Skin-colour segmentation combined with image processing techniques played an important role in detecting the human face as an elliptical area, after which several further techniques were used, such as enhancement, thresholding, morphological operations, edge detection and binarization, to achieve the aims of the suggested methods. Nose detection as a rectangular region was investigated by looking for the longest vertical line in the elliptical area; the nose was detected and extracted as a rectangular region. Detecting the mouth was achieved by looking for the longest horizontal line under the tip of the nose and then thresholding this region to detect the lips; by extracting the corner points of the lips, we extracted the mouth as an elliptical region. Finally, the eye regions were tracked in the upper part of the ellipse above the tip of the nose and detected as rectangular regions. Further work is in progress to enhance these techniques so that they operate on real-time images and can be applied in the medical field.
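To make the pipeline above more concrete (skin-colour segmentation, thresholding, morphological cleaning, then an elliptical face region), here is a minimal Python/OpenCV sketch. The YCrCb skin thresholds, kernel size and file names are assumptions rather than the authors' values.

```python
# Illustrative sketch: skin-colour segmentation followed by an elliptical face fit.
# The YCrCb thresholds and kernel size below are assumed values, not the authors'.
import cv2
import numpy as np

image = cv2.imread("frame.jpg")                       # hypothetical input frame
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)

# Commonly used (approximate) skin range in Cr/Cb; an assumption for illustration.
lower = np.array((0, 133, 77), np.uint8)
upper = np.array((255, 173, 127), np.uint8)
skin = cv2.inRange(ycrcb, lower, upper)

# Morphological opening/closing to clean the binary skin mask.
kernel = np.ones((7, 7), np.uint8)
skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)
skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)

# Take the largest skin blob and approximate the face as an ellipse.
contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
face = max(contours, key=cv2.contourArea)
if len(face) >= 5:                                    # fitEllipse needs >= 5 points
    box = cv2.fitEllipse(face)                        # ((cx, cy), (w, h), angle)
    cv2.ellipse(image, box, (0, 255, 0), 2)
    cv2.imwrite("face_ellipse.jpg", image)
```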


Author(s):  
CHIN-CHEN CHANG ◽  
YUAN-HUI YU

This paper proposes an efficient approach for human face detection and exact facial feature location in a head-and-shoulder image. The method searches for an eye-pair candidate as a baseline, using the characteristic high intensity contrast between the iris and the sclera. To discover the other facial features, the algorithm applies geometric knowledge of the human face based on the obtained eye-pair candidate. The human face is finally verified using these detected facial features. Owing to the merits of applying the Prune-and-Search and simple filtering techniques, we show that the proposed method indeed achieves very promising performance in face detection and facial feature location.
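The eye-pair-candidate idea, dark iris blobs standing out against the bright sclera and paired by rough horizontal alignment, can be sketched as follows. This is a simplified illustration with assumed thresholds and geometric ratios; it is not the authors' Prune-and-Search procedure.

```python
# Illustrative sketch of eye-pair candidates: dark iris blobs inside a brighter
# sclera/face region, paired when roughly level and plausibly spaced.
# Thresholds and ratios are assumptions, not the authors' parameters.
import cv2

face = cv2.imread("face_crop.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical face crop
h, w = face.shape
upper = face[: h // 2, :]                                   # eyes lie in the upper half

# Dark regions (iris/pupil) stand out against the bright sclera.
_, dark = cv2.threshold(upper, 60, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

centers = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

# Keep pairs that are nearly level and separated by a plausible fraction of the width.
pairs = []
for i in range(len(centers)):
    for j in range(i + 1, len(centers)):
        (x1, y1), (x2, y2) = centers[i], centers[j]
        if abs(y1 - y2) < 0.05 * h and 0.2 * w < abs(x1 - x2) < 0.6 * w:
            pairs.append((centers[i], centers[j]))

print(f"{len(pairs)} eye-pair candidate(s)")
```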


2017 ◽  
Vol 7 (1.1) ◽  
pp. 213
Author(s):  
Sheela Rani ◽  
Vuyyuru Tejaswi ◽  
Bonthu Rohitha ◽  
Bhimavarapu Akhil

Face recognition has turned out to be one of the most important and interesting areas of research. A face recognition framework is a computer application capable of recognizing or verifying a human face from a digital picture, from video frames, etc. One of the approaches is to match the chosen facial features against the pictures in a database. It is normally utilized as part of security frameworks and can be implemented alongside other biometrics, for example, fingerprint or eye-iris recognition systems. A picture is a combination of edges: the curved line portions where the brightness of the image changes sharply are known as edges. We utilize a similar idea in the field of face detection, where the intensity of facial colours is treated as a consistent value. Face recognition involves comparing a picture with a database of stored faces in order to identify the individual in the given input picture. The entire procedure covers three phases, face detection, feature extraction, and recognition, and different strategies are required according to the specified requirements.
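The notion of edges used above, portions of an image where brightness changes sharply, can be illustrated with a few lines of OpenCV; the thresholds and file name below are assumptions for illustration only.

```python
# Illustrative sketch: compute an edge map, i.e. the locations where image
# brightness changes sharply. Thresholds and file names are assumed values.
import cv2

gray = cv2.imread("portrait.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
edges = cv2.Canny(gray, threshold1=100, threshold2=200)   # binary edge map
cv2.imwrite("edges.png", edges)
```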


2012 ◽  
Vol 7 (1) ◽  
pp. 174-197 ◽  
Author(s):  
Heather Small ◽  
Kristine Kasianovitz ◽  
Ronald Blanford ◽  
Ina Celaya

Social networking sites and other social media have enabled new forms of collaborative communication and participation for users, and created additional value as rich data sets for research. Research based on accessing, mining, and analyzing social media data has risen steadily over the last several years and is increasingly multidisciplinary; researchers from the social sciences, humanities, computer science and other domains have used social media data as the basis of their studies. The broad use of this form of data has implications for how curators address preservation, access and reuse for an audience with divergent disciplinary norms related to privacy, ownership, authenticity and reliability. In this paper, we explore how the characteristics of the Twitter platform, coupled with an ambiguous and evolving understanding of privacy in networked communication, and divergent disciplinary understandings of the resulting data, combine to create complex issues for curators trying to ensure broad-based and ethical reuse of Twitter data. We provide a case study of a specific data set to illustrate how data curators can engage with the topics and questions raised in the paper. While some initial suggestions are offered to librarians and other information professionals who are beginning to receive social media data from researchers, our larger goal is to stimulate discussion and prompt additional research on the curation and preservation of social media data.


2021 ◽  
Vol 44 (1) ◽  
Author(s):  
Claire M. Gillan ◽  
Robb B. Rutledge

Improvements in understanding the neurobiological basis of mental illness have unfortunately not translated into major advances in treatment. At this point, it is clear that psychiatric disorders are exceedingly complex and that, in order to account for and leverage this complexity, we need to collect longitudinal datasets from much larger and more diverse samples than is practical using traditional methods. We discuss how smartphone-based research methods have the potential to dramatically advance our understanding of the neuroscience of mental health. This, we expect, will take the form of complementing lab-based hard neuroscience research with dense sampling of cognitive tests, clinical questionnaires, passive data from smartphone sensors, and experience-sampling data as people go about their daily lives. Theory- and data-driven approaches can help make sense of these rich data sets, and the combination of computational tools and the big data that smartphones make possible has great potential value for researchers wishing to understand how aspects of brain function give rise to, or emerge from, states of mental health and illness. Expected final online publication date for the Annual Review of Neuroscience, Volume 44 is July 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.


2021 ◽  
Author(s):  
Luciano Serafini ◽  
Artur d’Avila Garcez ◽  
Samy Badreddine ◽  
Ivan Donadello ◽  
Michael Spranger ◽  
...  

The recent availability of large-scale data combining multiple data modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area mostly by adopting a sub-symbolic distributed representation. It is generally accepted now that such purely sub-symbolic approaches can be data inefficient and struggle at extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations ideally based on human-readable symbols. Despite being more explainable and having success at reasoning, symbolic AI usually struggles when faced with incomplete knowledge or inaccurate, large data sets and combinatorial knowledge. Neurosymbolic AI attempts to benefit from the strengths of both approaches combining reasoning with complex representation of knowledge and efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge into efficient sub-symbolic representations and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions for such learning systems. Logic Tensor Networks (LTN) are a neurosymbolic AI system for querying, learning and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics such that every symbolic expression has an interpretation that is grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization. This chapter presents the LTN framework and illustrates its use on knowledge completion tasks to ground the relational predicates (symbols) into a concrete interpretation (vectors and tensors). It then investigates the use of LTN on semi-supervised learning, learning of embeddings and reasoning. LTN has been applied recently to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning, which use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing the next steps for neurosymbolic AI and LTN-based AI models.
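The core Real Logic idea, grounding symbols onto real-valued tensors so that logical formulas become differentiable truth degrees that can be maximized by gradient descent, can be sketched in a few lines of plain PyTorch. This is a toy illustration and deliberately does not use the LTN library's own API; the data, the network and the aggregator below are assumptions.

```python
# Toy sketch of the Real Logic idea: a predicate is grounded as a differentiable
# function into [0, 1], a universally quantified formula becomes an aggregated
# truth value, and a conjunction uses the product t-norm. The knowledge base
# "forall x in pos: P(x) and forall x in neg: not P(x)" is maximized by gradient
# descent. NOT the LTN library API; data, network and aggregator are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Grounding of a unary predicate P(x) as a small neural network into [0, 1].
P = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

# Hypothetical data: points assumed positive/negative for P.
pos = torch.randn(50, 2) + 2.0
neg = torch.randn(50, 2) - 2.0

def forall(truth_values):
    # Simple aggregator for the universal quantifier (mean of truth degrees).
    return truth_values.mean()

opt = torch.optim.Adam(P.parameters(), lr=0.01)
for step in range(500):
    opt.zero_grad()
    sat = forall(P(pos)) * forall(1.0 - P(neg))   # product t-norm for conjunction
    loss = 1.0 - sat                              # maximize satisfaction
    loss.backward()
    opt.step()

print(f"final satisfaction: {sat.item():.3f}")
```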


Galaxies ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 9 ◽  
Author(s):  
Jean-Philippe Lenain

Blazars are jetted active galactic nuclei with a jet pointing close to the line of sight, hence enhancing their intrinsic luminosity and variability. Monitoring these sources is essential in order to catch them flaring and promptly organize follow-up multi-wavelength observations, which are key to providing the rich data sets used to derive, e.g., the emission mechanisms at work and the size and location of the flaring zone. In this context, the Fermi-LAT has proven to be an invaluable instrument, whose data are used to trigger many follow-up observations at high and very high energies. A few examples are illustrated here, as well as a description of different data products and pipelines, with a focus on FLaapLUC, a tool in use within the H.E.S.S. collaboration.
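A simple flux-threshold trigger conveys the flavour of such monitoring pipelines: flag any light-curve bin whose flux exceeds the long-term level by a few standard deviations and prompt follow-up observations. This is a generic sketch with synthetic data and an assumed trigger level, not the actual FLaapLUC algorithm.

```python
# Illustrative sketch of a simple flare trigger on a gamma-ray light curve:
# flag bins whose flux exceeds the long-term mean by a few standard deviations.
# Generic illustration only; the data and threshold are assumed, not FLaapLUC's.
import numpy as np

rng = np.random.default_rng(1)
flux = rng.normal(1.0e-7, 1.0e-8, size=200)   # hypothetical daily fluxes (ph cm^-2 s^-1)
flux[150] = 2.0e-7                            # injected flare for illustration

mean, std = flux.mean(), flux.std()
threshold = mean + 3.0 * std                  # assumed 3-sigma trigger level

triggers = np.flatnonzero(flux > threshold)
print("trigger on bins:", triggers)           # these bins would prompt follow-up
```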


Bioanalysis ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 87-98 ◽  
Author(s):  
Anhye Kim ◽  
Stephen R Dueker ◽  
Feng Dong ◽  
Ad F Roffel ◽  
Sang-won Lee ◽  
...  

Aim: Human 14C radiotracer studies provide information-rich data sets that enable informed decision making in clinical drug development. These studies are supported by liquid scintillation counting after conventional-sized 14C doses (50–200 μCi) or complex accelerator mass spectrometry (AMS) after microtracer-sized doses (∼0.1–1 μCi). Mid-infrared laser-based ‘cavity ring-down spectroscopy' (CRDS) is an emerging platform for the sensitive quantitation of 14C tracers. Results & methodology: We compared the total 14C concentrations in plasma and urine samples from a microtracer study using both CRDS and AMS technology. The data were evaluated using statistical and pharmacokinetic modeling. Conclusion: The CRDS method closely reproduced the AMS method for total 14C concentrations. With optimization of the automated sample interface and further testing, it promises to be an accessible, robust system for pivotal microtracer investigations.
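A paired method comparison of the kind described, CRDS against AMS for the same samples, is commonly summarized with a regression between the two methods and a mean bias. The sketch below uses synthetic placeholder numbers, not data from the study.

```python
# Illustrative sketch of a paired method comparison (CRDS vs AMS) using linear
# regression and a Bland-Altman-style mean bias. The numbers are synthetic
# placeholders, not results from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ams = rng.uniform(0.5, 10.0, size=30)                    # hypothetical AMS 14C results
crds = ams * 1.02 + rng.normal(0.0, 0.1, size=30)        # hypothetical CRDS results

slope, intercept, r, p, se = stats.linregress(ams, crds)
bias = np.mean(crds - ams)                               # mean difference between methods

print(f"slope={slope:.3f}, intercept={intercept:.3f}, r^2={r**2:.3f}")
print(f"mean bias (CRDS - AMS) = {bias:.3f}")
```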


2019 ◽  
Vol 487 (3) ◽  
pp. 4037-4056 ◽  
Author(s):  
Luca Di Mascolo ◽  
Eugene Churazov ◽  
Tony Mroczkowski

We report the joint analysis of single-dish and interferometric observations of the Sunyaev–Zeldovich (SZ) effect from the galaxy cluster RX J1347.5−1145. We have developed a parametric fitting procedure that uses native imaging and visibility data, and tested it using the rich data sets from ALMA, Bolocam, and Planck available for this object. RX J1347.5−1145 is a very hot and luminous cluster showing signatures of a merger. Previous X-ray-motivated SZ studies have highlighted the presence of an excess SZ signal south-east of the X-ray peak, which was generally interpreted as a strong shock-induced pressure perturbation. Our model, when centred at the X-ray peak, confirms this. However, the presence of two almost equally bright giant elliptical galaxies separated by ∼100 kpc makes the choice of the cluster centre ambiguous, and allows for considerable freedom in modelling the structure of the galaxy cluster. For instance, we have shown that the SZ signal can be well described by a single smooth ellipsoidal generalized Navarro–Frenk–White profile, where the best-fitting centroid is located between the two brightest cluster galaxies. This leads to a considerably weaker excess SZ signal from the south-eastern substructure. Further, the most prominent features seen in the X-ray can be explained as predominantly isobaric structures, alleviating the need for highly supersonic velocities, although overpressurized regions associated with the moving subhaloes are still present in our model.
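The generalized Navarro–Frenk–White (gNFW) pressure profile mentioned above has the standard parametric form P(r) = P0 / [(r/r_s)^gamma * (1 + (r/r_s)^alpha)^((beta - gamma)/alpha)]. The sketch below evaluates it on a radial grid with placeholder parameter values, not the paper's best-fitting model.

```python
# Illustrative sketch of the generalized NFW (gNFW) pressure profile used to
# model SZ signals. Parameter values are placeholders, not the paper's best fit.
import numpy as np

def gnfw_pressure(r, p0, r_s, alpha, beta, gamma):
    """P(r) = p0 / [ (r/r_s)^gamma * (1 + (r/r_s)^alpha)^((beta - gamma)/alpha) ]."""
    x = r / r_s
    return p0 / (x**gamma * (1.0 + x**alpha) ** ((beta - gamma) / alpha))

# Evaluate on a radial grid (kpc) with assumed parameter values.
r = np.logspace(1, 3.5, 100)                    # ~10 kpc to ~3 Mpc
pressure = gnfw_pressure(r, p0=0.5, r_s=500.0, alpha=1.05, beta=5.5, gamma=0.31)
print(pressure[:5])
```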

