Indian Currency Recognition from Live Video Using Deep Learning

Author(s): Kushal Bhavsar, Keyurbhai Jani, Rakeshkumar Vanzara
2021, Vol 10 (1)
Author(s): Hyeonho Song, Kunwoo Park, Meeyoung Cha

Abstract: Live streaming services enable the audience to interact with one another and the streamer over live content. The surging popularity of live streaming platforms has created a competitive environment. To retain existing viewers and attract newcomers, streamers and fans often create a well-condensed summary of the streamed content. However, this process is manual and costly due to the length of online live streaming events. The current study identifies enjoyable moments in user-generated live video content by examining the audiences’ collective evaluation of its epicness. We characterize what features “epic” moments and present a deep learning model to extract them based on analyzing two million user-recommended clips and the associated chat conversations. The evaluation shows that our data-driven approach can identify epic moments from user-generated streamed content that cover various contexts (e.g., victory, funny, awkward, embarrassing). Our user study further demonstrates that the proposed automatic model performs comparably to expert suggestions. We discuss implications of the collective decision-driven extraction in identifying diverse epic moments in a scalable way.
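The abstract describes a deep learning model trained on user-recommended clips and chat logs; that model is not reproducible from the abstract alone. As a minimal, hypothetical baseline only (not the authors' method), the intuition that chat activity surges around "epic" moments can be sketched as a simple spike detector over per-second message counts:

```python
# Hypothetical baseline, NOT the authors' deep model: flag candidate
# "epic" moments as seconds where the chat-message count spikes well
# above the recent average, a common proxy for audience excitement.

def chat_spike_moments(counts, window=5, factor=3.0):
    """Return indices (seconds) whose chat-message count is at least
    `factor` times the mean of the preceding `window` seconds."""
    moments = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] >= factor * baseline:
            moments.append(i)
    return moments

# Example: a burst of 20 messages after a quiet stretch is flagged.
# chat_spike_moments([2, 3, 2, 3, 2, 20, 2]) -> [5]
```

A learned model like the one in the paper would replace this threshold rule with features of both the video clip and the chat text, but the spike heuristic illustrates the signal it exploits.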


Emotion detection is an active area of interest today, applied in fields such as suspect interrogation and human–computer interaction. The proposed work detects human emotions from a live or pre-recorded video with the help of a convolutional neural network (CNN) and the Haar cascade algorithm. The core of the approach is facial expression analysis. The proposed work aims to classify a given video, or a live video, into one of seven emotions (neutral, angry, happy, fearful, disgusted, sad, surprised). Our work also detects multiple faces in live video and classifies each of their emotions. In addition, it captures a frame from the video every second, stores the frames in a folder, and generates a video from those frames together with their respective emotion labels.
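The per-second frame-capture step above can be sketched in a few lines. This is a minimal sketch assuming the video is decoded at a known frame rate (e.g. via OpenCV's `cv2.VideoCapture`); the Haar cascade face detector and the trained CNN are noted only as comments, since they require model files not included here:

```python
# Sketch of the frame-sampling step: pick one frame per second of video,
# given the frame rate. The seven emotion classes match the abstract.

EMOTIONS = ["neutral", "angry", "happy", "fearful",
            "disgusted", "sad", "surprised"]

def frame_indices_per_second(fps: int, total_frames: int) -> list:
    """Indices of one frame per second of video: 0, fps, 2*fps, ..."""
    return list(range(0, total_frames, fps))

# For each sampled index, a pipeline like the one described would typically:
#   1. seek to and read the frame,
#   2. detect faces with cv2.CascadeClassifier(
#          "haarcascade_frontalface_default.xml"),
#   3. crop each face, resize it, and pass it through the trained CNN
#      to obtain one of the EMOTIONS labels,
#   4. save the annotated frame to disk and later stitch the saved
#      frames back into a video (e.g. with cv2.VideoWriter).
```

For a 30 fps clip of 150 frames, `frame_indices_per_second(30, 150)` selects frames 0, 30, 60, 90 and 120, i.e. one per second.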


2017, Vol 24 (6), pp. 736-740
Author(s): Maria Torres Vega, Decebal Constantin Mocanu, Jeroen Famaey, Stavros Stavrou, Antonio Liotta

Author(s): Stellan Ohlsson

2019, Vol 53 (3), pp. 281-294
Author(s): Jean-Michel Foucart, Augustin Chavanne, Jérôme Bourriau

Many contributions of Artificial Intelligence (AI) are envisioned in medicine. In orthodontics, several automated solutions have been available for some years in X-ray imaging (automated cephalometric analysis, automated airway analysis) and, for a few months, for digital models (automatic model analysis, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to assess the reliability of automated model analysis, both in terms of digitization and of segmentation. Comparing model-analysis results obtained automatically with those obtained by several orthodontists demonstrates the reliability of the automatic analysis; the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics which, based on deep learning and big data, should in the medium term enable a shift toward a more preventive and more predictive orthodontics.


2020
Author(s): L Pennig, L Lourenco Caldeira, C Hoyer, L Görtz, R Shahzad, ...
