ETDR: An Exploratory View of Text Detection and Recognition in Images and Videos

2021 ◽  
Vol 35 (5) ◽  
pp. 383-393
Author(s):  
Chaitra Yuvaraj Lokkondra ◽  
Dinesh Ramegowda ◽  
Gopalakrishna Madigondanahalli Thimmaiah ◽  
Ajay Prakash Bassappa Vijaya ◽  
Manjula Hebbaka Shivananjappa

Images and videos with text content are a direct source of information, and today there is a high demand for image and video data that can be intelligently analyzed. A growing number of researchers are focusing on text identification, making it an active topic in machine vision research, and as a result real-time applications such as text detection, localization, and tracking have become more prevalent in text analysis systems. This survey examines how textual information can be extracted. First, it presents reliable datasets for text identification in images and videos. Second, it details the various text formats found in images and video. Third, it describes the process flow for extracting information from text and the existing machine learning and deep learning techniques used to train such models. Fourth, it explains the evaluation metrics used to validate the models. Finally, it brings together the applications and difficulties of text extraction across a wide range of fields, where the difficulties center on the most frequent real-world challenges, such as capture techniques, lighting, and environmental conditions. Images and videos have evolved into valuable sources of data, and the text inside them provides a massive quantity of facts and statistics; however, such data is not easy to access. This exploratory view provides simpler and more accurate mathematical modeling and evaluation techniques for retrieving the text in images and video in an accessible form.
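To make the detection-then-recognition process flow concrete, the following is a minimal sketch in Python using OpenCV and Tesseract (via the pytesseract wrapper); the input file name and the confidence cut-off are illustrative assumptions, not details from the study.

import cv2
import pytesseract

# Read the input image and suppress lighting variation, one of the
# real-world challenges noted above, with Otsu binarisation.
image = cv2.imread("sign.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Tesseract returns word-level boxes and confidences; keep only the
# confident detections.
data = pytesseract.image_to_data(binary, output_type=pytesseract.Output.DICT)
for text, conf in zip(data["text"], data["conf"]):
    if text.strip() and float(conf) > 60:  # 60 is an arbitrary cut-off
        print(text, conf)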

1985 ◽  
Vol 1 (4) ◽  
pp. 400-416 ◽  
Author(s):  
Malcolm Page

A series of nearly two dozen ‘Theatre Checklists’ appeared as supplements to the old series of Theatre Quarterly, recording biographical, performance, and bibliographical information in accessible form about a wide range of mainly living playwrights. The format was subsequently extended by Simon Trussler for the ‘Writers on File’ series he now edits for Methuen London, of which the first six titles have recently appeared. However, it was felt that the former style of checklist would still provide a valuable source of information for writers not scheduled for early inclusion in the new series, especially when this complemented other forms of documentation in the journal: and this first ‘NTQ Checklist’ on the work of John McGrath thus appears alongside Tony Mitchell's interview with the playwright. Its compiler, Malcolm Page, teaches in the English Department of Simon Fraser University, British Columbia. Besides contributing several of the earlier series of ‘Theatre Checklists’, Malcolm Page has published widely in the field of modern British drama: his study of John Arden appeared last year from Twayne, and his volume on Arden was among the first of the ‘Writers on File’ from Methuen.


2020 ◽  
pp. 1-10
Author(s):  
Bryce J. Dietrich

Abstract Although previous scholars have used image data to answer important political science questions, less attention has been paid to video-based measures. In this study, I use motion detection to understand the extent to which members of Congress (MCs) literally cross the aisle, but motion detection can be used to study a wide range of political phenomena, such as protests, political speeches, campaign events, or oral arguments. I find that not only are Democrats and Republicans less willing to literally cross the aisle, but that this behavior is also predictive of future party voting, even when previous party voting is included as a control. This, however, is only one of the many ways motion detection can be used by social scientists. In this way, the present study is not the end, but the beginning of an important new line of research in which video data is more actively used in social science research.
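The study's own pipeline is not reproduced here, but a minimal frame-differencing sketch in Python with OpenCV conveys the basic idea of scoring motion in video; the file name and thresholds are illustrative assumptions, not the paper's settings.

import cv2

cap = cv2.VideoCapture("floor_footage.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels that change between consecutive frames indicate motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion = cv2.countNonZero(mask) / mask.size  # fraction of moving pixels
    print(f"motion score: {motion:.4f}")
    prev_gray = gray
cap.release()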


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the increase in the amount of data available as a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of the field of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to deploy deep learning models on a wide range of microcontrollers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing deep learning architectures to be introduced. The experiments herein show that the proposed system is competitive when compared with other commercial systems.
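The model-preparation side of such a workflow can be sketched in Python: a trained Keras model is converted into a quantised TensorFlow Lite flatbuffer that TensorFlow Lite Micro can then execute on an Mbed OS target. The tiny architecture and file name below are illustrative assumptions, not the models from the paper.

import tensorflow as tf

# A minimal stand-in model; any trained Keras model would do.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Convert to a TensorFlow Lite flatbuffer with default quantisation
# to reduce the flash and RAM footprint on the microcontroller.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# TensorFlow Lite Micro consumes the flatbuffer as a C array,
# typically generated with `xxd -i model.tflite > model.h`.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)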


2021 ◽  
pp. 101-107
Author(s):  
Mohammad Alshehri

Presently, precise localization and tracking have become significant for enabling smartphone-assisted navigation to maximize accuracy in real-time environments. Fingerprint-based localization is the most commonly available model for accomplishing effective outcomes. With this motivation, this study focuses on designing an efficient smartphone-assisted indoor localization and tracking model using the glowworm swarm optimization algorithm (ILT-GSO). The ILT-GSO algorithm builds on the light-emissive behavior of glowworms to determine location, and a Kalman filter is applied to refine the estimation process and update the initial positions of the glowworms. A wide range of experiments was carried out, and the results were investigated in terms of distinct evaluation metrics. The simulation outcome demonstrated considerable enhancement in the real-time environment with reduced computational complexity. The ILT-GSO algorithm resulted in increased localization performance with minimal error compared with recent techniques.
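As a rough illustration of the glowworm mechanics the abstract refers to, here is a minimal glowworm swarm optimization loop in Python that seeks the minimum of a toy objective; the parameter values follow common GSO defaults and the Kalman refinement is omitted, so none of this reflects the paper's actual configuration.

import numpy as np

rng = np.random.default_rng(0)
n, dim, steps = 30, 2, 100
rho, gamma, step = 0.4, 0.6, 0.03   # luciferin decay, gain, move size
radius = 2.0                        # fixed sensing radius for simplicity

def objective(x):                   # stand-in for localization error
    return np.sum(x**2, axis=-1)

pos = rng.uniform(-3, 3, (n, dim))
luciferin = np.zeros(n)

for _ in range(steps):
    # Brighter glowworms correspond to better (lower-error) positions.
    luciferin = (1 - rho) * luciferin + gamma * (-objective(pos))
    new_pos = pos.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        brighter = np.where((d < radius) & (luciferin > luciferin[i]))[0]
        if brighter.size:
            j = rng.choice(brighter)      # move toward a brighter neighbor
            new_pos[i] = pos[i] + step * (pos[j] - pos[i]) / (d[j] + 1e-12)
    pos = new_pos

print("best position:", pos[np.argmax(luciferin)])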


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Majid Amirfakhrian ◽  
Mahboub Parhizkar

Abstract In the next decade, machine vision technology will have an enormous impact on industrial work because of the latest technological advances in this field. These advances are so significant that the use of this technology is now essential. Machine vision is the process of using a wide range of technologies and methods to provide automated inspection in an industrial setting based on imaging, process control, and robot guidance. One application of machine vision is diagnosing traffic accidents, where car vision is used to detect the extent of damage to vehicles. In this article, using image processing and machine learning techniques, a new method is presented to improve the accuracy of detecting damaged areas in traffic accidents. Evaluating the proposed method and comparing it with previous works showed that it identifies damaged areas more accurately and has a shorter execution time.
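One simple way to picture the damaged-area detection task is to difference a photo of the damaged car against a reference photo of the same model and outline the regions that changed; this naive approach is an illustrative assumption for the sketch below, not the method proposed in the article.

import cv2

ref = cv2.imread("car_reference.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical
damaged = cv2.imread("car_damaged.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical

diff = cv2.absdiff(ref, damaged)
diff = cv2.GaussianBlur(diff, (5, 5), 0)           # suppress sensor noise
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Outline each sufficiently large changed region as candidate damage.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
out = cv2.cvtColor(damaged, cv2.COLOR_GRAY2BGR)
for c in contours:
    if cv2.contourArea(c) > 200:                   # area cut-off is arbitrary
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("damage_candidates.jpg", out)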


Author(s):  
Neha Thomas ◽  
Susan Elias

Abstract— Detection of fake reviews and reviewers is currently a challenging problem in cyberspace, primarily because of the dynamic nature of the methods used to fake reviews. Several aspects must be considered when analyzing reviews to classify them effectively into genuine and fake. Sentiment analysis, opinion mining, and intent mining are fields of research that try to accomplish this goal through natural language processing of the text content of the review. In this paper, an approach that evaluates review ratings along a timeline is presented. An Amazon dataset comprising ratings for a wide range of products was used for the analysis presented here; the analysis of the ratings was carried out for an electronic product over a period of six years. The computed average rating helps to identify linear classifiers that define solution boundaries within the dataspace. This enables a product-specific classification of review ratings, and suitable recommendations can also be generated automatically. The paper explains a methodology to evaluate average product ratings over time and presents the research outcomes using a novel classification tool. The proposed approach helps to determine the optimal point for distinguishing between fake and genuine ratings for each product. Index Terms: Fake reviews, Fake Ratings, Product Ratings, Online Shopping, Amazon Dataset.
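A minimal sketch of the rating-timeline idea in Python with pandas: compute a per-month average rating for one product and flag months that stray far from the product's long-run mean. The column names, product id, resampling window, and deviation cut-off are illustrative assumptions, not the paper's classifier.

import pandas as pd

reviews = pd.read_csv("amazon_ratings.csv", parse_dates=["timestamp"])
product = reviews[reviews["product_id"] == "B000EXAMPLE"]  # hypothetical id

# Average rating per month for this product, then a long-run baseline.
monthly = product.set_index("timestamp")["rating"].resample("M").mean()
baseline = monthly.mean()

# Months whose average rating deviates sharply from the baseline are
# candidates for containing fake (inflating or deflating) ratings.
suspicious = monthly[(monthly - baseline).abs() > 1.0]
print(suspicious)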


2016 ◽  
Vol 15 (1) ◽  
pp. 63-80
Author(s):  
Jitrlada ROJRATANAVIJIT ◽  
Preecha VICHITTHAMAROS ◽  
Sukanya PHONGSUPHAP

The emergence of Twitter in Thailand has given millions of users a platform to express and share their opinions about products and services, among other subjects, and so Twitter is considered to be a rich source of information for companies to understand their customers by extracting and analyzing sentiment from Tweets. This offers companies a fast and effective way to monitor public opinions on their brands, products, services, etc. However, sentiment analysis performed on Thai Tweets has challenges brought about by language-related issues, such as the difference in writing systems between Thai and English, short-length messages, slang words, and word usage variation. This research paper focuses on Tweet classification and on solving data sparsity issues. We propose a mixed method of supervised learning techniques and lexicon-based techniques to filter Thai opinions and to then classify them into positive, negative, or neutral sentiments. The proposed method includes a number of pre-processing steps before the text is fed to the classifier. Experimental results showed that the proposed method overcame previous limitations from other studies and was very effective in most cases. The average accuracy was 84.80 %, with 82.42 % precision, 83.88 % recall, and 82.97 % F-measure.
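The mixed lexicon-plus-supervised design can be sketched as follows: a small sentiment lexicon contributes count features that are stacked next to bag-of-words features before a standard classifier is trained. The tiny English lexicon and toy examples below are illustrative assumptions, not the study's Thai resources.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

POSITIVE = {"good", "great", "love"}     # stand-in sentiment lexicon
NEGATIVE = {"bad", "slow", "hate"}

def lexicon_features(texts):
    # Count positive and negative lexicon hits per text.
    rows = []
    for t in texts:
        words = t.lower().split()
        rows.append([sum(w in POSITIVE for w in words),
                     sum(w in NEGATIVE for w in words)])
    return csr_matrix(rows)

texts = ["great service, love it", "slow delivery, bad support",
         "the package arrived on tuesday"]
labels = ["positive", "negative", "neutral"]

vec = CountVectorizer()
X = hstack([vec.fit_transform(texts), lexicon_features(texts)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = ["bad experience, hate the app"]
X_test = hstack([vec.transform(test), lexicon_features(test)])
print(clf.predict(X_test))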


2020 ◽  
Vol 9 (S1-Dec2020) ◽  
pp. 30-33
Author(s):  
Sindhu Thamban

The Jigsaw II, one form of cooperative learning technique, is an efficient strategy to use in a language classroom. The basic activities include 1) reading with team members, 2) expert group discussion, 3) team members' reports, 4) a test, and 5) team recognition. The Jigsaw II strategy is easy to implement and works well with a wide range of students. Previous research related to Jigsaw II shows that it is most powerful, effective, and appropriate in situations where learning is from text-based materials. Reviews related to the strategy show that no research has been carried out to develop the reading comprehension of high school students, particularly in the Indian context. Hence, through this paper an attempt has been made by the researcher to check the effectiveness of Jigsaw II in developing the reading comprehension of high school students. The study statistically revealed a significant difference in the reading comprehension achievement of the students taught using the traditional method and those taught using the Jigsaw II strategy. In accordance with the qualitative and quantitative findings attained, Jigsaw II was found to be more effective than the traditional teaching method in developing the reading comprehension of high school students.


2021 ◽  
Author(s):  
K. Emma Knowland ◽  
Christoph Keller ◽  
Krzysztof Wargan ◽  
Brad Weir ◽  
Pamela Wales ◽  
...  

NASA's Global Modeling and Assimilation Office (GMAO) produces high-resolution global forecasts for weather, aerosols, and air quality. The NASA Global Earth Observing System (GEOS) model has been expanded to provide global near-real-time 5-day forecasts of atmospheric composition at an unprecedented horizontal resolution of 0.25 degrees (~25 km). This composition forecast system (GEOS-CF) combines the operational GEOS weather forecasting model with the state-of-the-science GEOS-Chem chemistry module (version 12) to provide detailed analysis of a wide range of air pollutants such as ozone, carbon monoxide, nitrogen oxides, and fine particulate matter (PM2.5). Satellite observations are assimilated into the system for improved representation of weather and smoke. The assimilation system is being expanded to include chemically reactive trace gases. We discuss current capabilities of the GEOS Constituent Data Assimilation System (CoDAS) to improve atmospheric composition modeling and possible future directions, notably incorporating new observations (TROPOMI, geostationary satellites) and machine learning techniques. We show how machine learning techniques can be used to correct for sub-grid-scale variability, which further improves model estimates at a given observation site.
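The machine-learning correction mentioned at the end can be illustrated with a small sketch: a regressor learns the forecast-minus-observation error at a site from simple local predictors, and the predicted error is subtracted from the raw forecast. The predictor set and synthetic data are illustrative assumptions, not the GEOS-CF configuration.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 1000
# Toy predictors: raw modeled ozone, wind speed, hour of day.
X = np.column_stack([
    rng.uniform(10, 80, n),    # modeled ozone (ppb)
    rng.uniform(0, 10, n),     # wind speed (m/s)
    rng.integers(0, 24, n),    # hour of day
])
# Synthetic sub-grid-scale bias: larger in calm winds, plus noise.
bias = 5.0 - 0.8 * X[:, 1] + rng.normal(0, 1, n)

model = GradientBoostingRegressor().fit(X, bias)
corrected = X[:, 0] - model.predict(X)   # bias-corrected ozone forecast
print("mean predicted bias:", model.predict(X).mean().round(2))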

