Vision based Office Automation and Security System using Machine Learning and Internet of Things

2018 ◽  
Vol 7 (2.24) ◽  
pp. 42
Author(s):  
Amber Goel ◽  
Apaar Khurana ◽  
Pranav Sehgal ◽  
K Suganthi

The paper focuses on two areas: automation and security. A Raspberry Pi is the heart of the project, driven by machine learning algorithms using OpenCV and the Internet of Things. Face recognition uses Local Binary Patterns, and if an unknown person uses a workstation, a message with a photo of that person is sent to the workstation's owner. Face recognition is also used to record and upload attendance and to switch appliances on and off automatically. Outside office hours, a human detection algorithm monitors for human presence: if an unknown person enters the office, a photo of that person is taken and sent to the authorities. This combination of computer vision, machine learning, and the Internet of Things serves as an efficient tool for both automation and security.
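As a rough illustration of the face-recognition step described in this abstract, the sketch below uses OpenCV's LBPH (Local Binary Patterns Histograms) recognizer on a single camera frame. The cascade path, model file, label handling, and confidence threshold are assumptions for illustration, not details taken from the paper.

```python
# Minimal LBPH face-recognition sketch (assumes opencv-contrib-python is installed
# and that a Haar cascade plus a pre-trained LBPH model are available; the file
# paths and threshold below are hypothetical).
import cv2

CASCADE_PATH = "haarcascade_frontalface_default.xml"  # assumed location
MODEL_PATH = "lbph_faces.yml"                         # assumed pre-trained model
THRESHOLD = 70.0                                      # assumed distance cutoff

detector = cv2.CascadeClassifier(CASCADE_PATH)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read(MODEL_PATH)

camera = cv2.VideoCapture(0)  # Raspberry Pi camera
ok, frame = camera.read()
camera.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        if distance < THRESHOLD:
            print(f"Recognized employee id={label} (distance={distance:.1f})")
        else:
            # Unknown person: in the system described above, this is where a
            # notification with the captured photo would be sent.
            cv2.imwrite("unknown_person.jpg", frame)
            print("Unknown person detected; photo saved for notification")
```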

Author(s):  
Abhay Patil

Abstract: Animal intrusion is a significant threat to crop yield, which affects food security and reduces value for farmers. The proposed model demonstrates an Internet of Things and machine-learning-based solution to overcome this obstacle. A Raspberry Pi runs the machine learning algorithm and is interfaced with an ESP8266 Wi-Fi module, a Pi Camera, a speaker/buzzer, and an LED. Machine learning algorithms such as the Region-based Convolutional Neural Network and Single Shot Detector play an essential role in detecting the target in the images and classifying the animals. The experiments show that the Single Shot Detector algorithm outperforms the Region-based Convolutional Neural Network algorithm. Finally, software interfaced with the Twilio API sends the information to the farmers so that they can take decisive action on their farmland. Keywords: Region-Based Convolutional Neural Network (R-CNN), TensorFlow, Raspberry Pi, Internet of Things (IoT), Single Shot Detector (SSD)
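The notification step of such a system can be sketched as follows. The detection function is a placeholder for whichever SSD or R-CNN model is deployed, and the Twilio credentials and phone numbers are hypothetical values, not details from the paper.

```python
# Hypothetical sketch of the farmer-notification step using the Twilio REST API
# (pip install twilio). detect_animal() stands in for the SSD/R-CNN inference
# described in the abstract; credentials and numbers below are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
AUTH_TOKEN = "your_auth_token"                      # placeholder
TWILIO_NUMBER = "+10000000000"                      # placeholder
FARMER_NUMBER = "+10000000001"                      # placeholder


def detect_animal(image_path):
    """Placeholder for the SSD/R-CNN inference step: a real implementation
    would return the detected animal class name, or None if nothing is found."""
    return None  # replace with actual model inference


def notify_farmer(animal):
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    client.messages.create(
        body=f"Alert: {animal} detected in the field. Please take action.",
        from_=TWILIO_NUMBER,
        to=FARMER_NUMBER,
    )


animal = detect_animal("pi_camera_frame.jpg")
if animal is not None:
    notify_farmer(animal)
```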


Author(s):  
Amit Kumar Tyagi ◽  
Poonam Chahal

With recent developments in technology and the integration of millions of Internet of Things devices, a large volume of data is generated every day (known as Big Data). This data is needed to drive the growth of many organizations and of applications such as e-healthcare. We are also entering an era of a smart world, in which robotics will be used in most applications (to solve the world's problems). Implementing robotics in applications such as medicine and automobiles is a goal of computer vision. Computer vision (CV) is realized through several components, such as artificial intelligence (AI), machine learning (ML), and deep learning (DL). Here, machine learning and deep learning techniques/algorithms are used to analyze Big Data. Today, organizations such as Google and Facebook use ML techniques to search for particular data or to recommend posts. Hence, the requirements of computer vision are fulfilled through these three fields: AI, ML, and DL.


2018 ◽  
Vol 19 (2) ◽  
pp. 213-220
Author(s):  
NIK NUR WAHIDAH NIK HASHIM ◽  
TAREK MOHAMED BOLAD ◽  
Noor Hazrin Hany Mohamad Hanif

ABSTRACT: Recognizing colors is a challenging problem for visually impaired people. The aim of this paper is to convert colors to sound and vibration so that fully or partially blind people can have a 'feeling', or better understanding, of the different colors around them. The idea is to develop a device that produces a distinct vibration for each color; the user can also hear the name of the color along with 'feeling' the vibration. Two approaches were used to distinguish between colors: RGB-to-HSV color conversion, compared with neural network and decision tree machine learning algorithms. A Raspberry Pi 3 with Open Source Computer Vision (OpenCV) software handles the image processing. The RGB-to-HSV color conversion algorithm was evaluated with three colors (red, blue, and green). The neural network and decision tree algorithms were trained and tested with eight colors (red, green, blue, orange, yellow, purple, white, and black) for conversion to sound and vibration. The neural network and decision tree algorithms achieved higher accuracy and efficiency for the majority of tested colors compared with the RGB-to-HSV conversion.
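A minimal sketch of the rule-based branch of this comparison is shown below: it converts a camera frame to HSV with OpenCV and maps the dominant hue to a color name. The hue and saturation thresholds are illustrative assumptions, not the values used in the paper.

```python
# Rule-based color recognition sketch: BGR frame -> HSV -> hue thresholds.
# The thresholds below are illustrative assumptions, not the paper's values.
import cv2
import numpy as np


def classify_color(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Average hue/saturation/value over the central patch of the frame.
    h, w = hsv.shape[:2]
    patch = hsv[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    hue, sat, val = patch.reshape(-1, 3).mean(axis=0)

    if sat < 40:                   # low saturation: treat as white or black
        return "white" if val > 128 else "black"
    if hue < 10 or hue >= 170:     # OpenCV hue range is 0-179
        return "red"
    if 35 <= hue < 85:
        return "green"
    if 100 <= hue < 130:
        return "blue"
    return "unknown"


camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if ok:
    # The color name would then drive text-to-speech and a distinct vibration pattern.
    print(classify_color(frame))
```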


2018 ◽  
Vol 2 (3) ◽  
pp. 26 ◽  
Author(s):  
Mahmut Yazici ◽  
Shadi Basurra ◽  
Mohamed Gaber

Machine learning has traditionally been performed solely on servers and high-performance machines. However, advances in chip technology have given us miniature devices that fit in our pockets, and mobile processors have vastly increased in capability, narrowing the gap between the simple processors embedded in such devices and their more complex cousins in personal computers. Thus, with the current advancement of these devices in terms of processing power, energy storage, and memory capacity, the opportunity has arisen to extract great value from on-device machine learning for Internet of Things (IoT) devices. Implementing machine learning inference on edge devices has huge potential and is still in its early stages; however, it is already more powerful than most realise. In this paper, a step forward has been taken to understand the feasibility of running machine learning algorithms, both training and inference, on a Raspberry Pi, a low-cost single-board computer widely used for IoT device development. Three different algorithms, Random Forest, Support Vector Machine (SVM), and Multi-Layer Perceptron (MLP), have been tested using ten diverse datasets on the Raspberry Pi to profile their performance in terms of speed (training and inference), accuracy, and power consumption. In the conducted tests, the SVM algorithm proved to be slightly faster in inference and more efficient in power consumption, but the Random Forest algorithm exhibited the highest accuracy. In addition to the performance results, we discuss usability scenarios and the idea of implementing more complex and taxing algorithms, such as deep learning, on these small devices in more detail.
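A simplified version of such a benchmark could look like the sketch below, which times training and inference for the three classifier families on a single scikit-learn toy dataset. The dataset, hyperparameters, and timing method are assumptions for illustration; power consumption, which the paper measures separately, is not captured here.

```python
# Sketch: timing training and inference of Random Forest, SVM, and MLP on a
# small dataset, as one might do on a Raspberry Pi. Settings are illustrative only.
import time

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)
    train_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    accuracy = model.score(X_test, y_test)
    infer_time = time.perf_counter() - t0

    print(f"{name}: train={train_time:.2f}s  infer={infer_time:.3f}s  acc={accuracy:.3f}")
```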


Telecom IT ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 50-55
Author(s):  
D. Saharov ◽  
D. Kozlov

The article deals with the CoAP protocol, which regulates the transmission and reception of information traffic by terminal devices in IoT networks. The article describes a model for detecting abnormal traffic in 5G/IoT networks using machine learning algorithms, as well as the main methods for solving this problem. The relevance of the article stems from the widespread adoption of the Internet of Things and the upcoming upgrade of mobile networks to the 5G generation.
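The abstract does not specify the article's model, so the sketch below only illustrates the general approach: an unsupervised detector (here an Isolation Forest) trained on per-device traffic features and queried for anomalies. The feature set and the simulated traffic are made-up assumptions for illustration.

```python
# Illustrative sketch of ML-based anomaly detection on CoAP/IoT traffic features
# (packets per second, mean payload size, mean inter-arrival time). The Isolation
# Forest model and the synthetic data are assumptions, not the article's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: [packets/s, mean payload bytes, mean inter-arrival s]
normal = rng.normal(loc=[20, 60, 0.05], scale=[3, 10, 0.01], size=(1000, 3))
# Simulated anomalous bursts (e.g., flooding): many packets, tiny inter-arrival times
anomalous = rng.normal(loc=[500, 40, 0.002], scale=[50, 10, 0.001], size=(20, 3))

detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)

# +1 means the sample looks normal, -1 means it is flagged as anomalous.
print(detector.predict(np.vstack([normal[:5], anomalous[:5]])))
```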


2021 ◽  
pp. 307-327
Author(s):  
Mohammed H. Alsharif ◽  
Anabi Hilary Kelechi ◽  
Imran Khan ◽  
Mahmoud A. Albreem ◽  
Abu Jahid ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 613
Author(s):  
David Safadinho ◽  
João Ramos ◽  
Roberto Ribeiro ◽  
Vítor Filipe ◽  
João Barroso ◽  
...  

The capability of drones to perform autonomous missions has led retail companies to use them for deliveries, saving time and human resources. In these services, the delivery depends on the Global Positioning System (GPS) to define an approximate landing point. However, the landscape can interfere with the satellite signal (e.g., tall buildings), reducing the accuracy of this approach. Changes in the environment can also invalidate the safety of a previously defined landing site (e.g., irregular terrain, a swimming pool). Therefore, the main goal of this work is to improve the process of goods delivery using drones, focusing on the detection of the potential receiver. We developed a solution that was refined through an iterative assessment composed of five test scenarios. The prototype complements GPS with Computer Vision (CV) algorithms, based on Convolutional Neural Networks (CNN), running on a Raspberry Pi 3 with a Pi NoIR Camera (i.e., No InfraRed, without an infrared filter). The experiments were performed with the Single Shot Detector (SSD) MobileNet-V2 and SSDLite-MobileNet-V2 models. The best results were obtained in the afternoon with the SSDLite architecture, for distances and heights between 2.5 and 10 m, with recalls from 59% to 76%. The results confirm that a cost-effective system with low computing power can perform aerial human detection and estimate the landing position without an additional visual marker.
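On-device detection of the receiver, as described above, can be sketched with a TensorFlow Lite SSD model. The model file, input handling, and output-tensor ordering below are assumptions typical of SSD MobileNet TFLite exports, not the exact pipeline used in the paper.

```python
# Sketch of on-device person detection with an SSD/SSDLite MobileNet-V2 TFLite
# model, as might run on a Raspberry Pi 3 with a Pi camera. The model path and
# output-tensor ordering are assumptions for a typical SSD TFLite export.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "ssdlite_mobilenet_v2.tflite"  # hypothetical model file
PERSON_CLASS_ID = 0                         # COCO "person" in many TFLite exports
SCORE_THRESHOLD = 0.5

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if ok:
    height, width = input_detail["shape"][1:3]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (width, height))
    interpreter.set_tensor(input_detail["index"],
                           np.expand_dims(resized, 0).astype(np.uint8))
    interpreter.invoke()

    boxes = interpreter.get_tensor(output_details[0]["index"])[0]    # assumed order
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]

    for box, cls, score in zip(boxes, classes, scores):
        if int(cls) == PERSON_CLASS_ID and score > SCORE_THRESHOLD:
            print(f"Receiver candidate detected (score={score:.2f}, box={box})")
```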

