Development of Human Pose Recognition System by Using Raspberry Pi and PoseNet Model

Author(s):  
Kosei Yamao ◽  
Ryosuke Kubota
2020 ◽  
Vol 67 (1) ◽  
pp. 133-141
Author(s):  
Dmitriy O. Khort ◽  
Aleksei I. Kutyrev ◽  
Igor G. Smirnov ◽  
Rostislav A. Filippov ◽  
Roman V. Vershinin

Technological capabilities of agricultural units cannot be optimally used without extensive automation of production processes and the use of advanced computer control systems. (Research purpose) To develop an algorithm for recognizing the coordinates and ripeness of garden strawberries in different lighting conditions and to describe the technological process of harvesting them in field conditions using a robotic actuator mounted on a self-propelled platform. (Materials and methods) The authors have developed a self-propelled platform with an automatic actuator for harvesting garden strawberry, which includes an actuator with six degrees of freedom, a co-axial gripper, MG966R servos, a PCA9685 controller, a Logitech HD C270 computer vision camera, a single-board Raspberry Pi 3 Model B+ computer, VL53L0X laser sensors, an SZBK07 300 W voltage regulator, and a Hubsan X4 Pro H109S Li-polymer battery. (Results and discussion) Using the Python programming language (version 3.7.2), the authors have developed a control algorithm for the automatic actuator, including operations to determine the X and Y coordinates of berries and their degree of maturity, as well as to calculate the distance to the berries. It has been found that the effectiveness of detecting berries, their area and their boundaries with a camera and the OpenCV library reaches 94.6 percent at an illumination of 300 lux. With an increase in the robotic platform speed to 1.5 kilometers per hour, the average area of the recognized berries, compared with the real area of the berries, decreased by 9 percent to 95.1 square centimeters at an illumination of 300 lux, by 17.8 percent to 88 square centimeters at an illumination of 200 lux, and by 36.4 percent to 76 square centimeters at an illumination of 100 lux. (Conclusions) The authors have provided a rationale for the technological process and developed an algorithm for harvesting garden strawberry using a robotic actuator mounted on a self-propelled platform. It has been proved that lighting conditions have a significant impact on the determination of the area, boundaries and ripeness of berries using a computer vision camera.
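The abstract does not describe the detection step in detail, but a berry-localization routine of this kind is commonly built from HSV color thresholding and contour analysis in OpenCV. The sketch below is illustrative only, assuming a hypothetical red-coverage rule as the ripeness proxy and hypothetical thresholds; it is not the authors' code.

```python
import cv2
import numpy as np

def detect_ripe_berries(frame_bgr, min_area_px=400):
    """Locate candidate strawberries and estimate ripeness from red coverage.

    Returns a list of (x, y, area_px, ripeness) tuples, where (x, y) is the
    contour centroid in image coordinates.  All thresholds are illustrative.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Red wraps around the hue axis, so combine two hue ranges.
    lower_red = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))
    upper_red = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(lower_red, upper_red)

    # Clean up speckle noise before extracting contours.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    berries = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area < min_area_px:
            continue
        m = cv2.moments(cnt)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        # Ripeness proxy: fraction of the bounding box covered by red pixels.
        x, y, w, h = cv2.boundingRect(cnt)
        red_fraction = cv2.countNonZero(mask[y:y + h, x:x + w]) / float(w * h)
        berries.append((cx, cy, area, red_fraction))
    return berries
```

Converting the pixel area returned here into square centimeters would additionally require camera calibration and a distance reading such as the one provided by the VL53L0X sensors mentioned in the abstract.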


2021 ◽  
Vol 11 (22) ◽  
pp. 10540
Author(s):  
Navjot Rathour ◽  
Zeba Khanam ◽  
Anita Gehlot ◽  
Rajesh Singh ◽  
Mamoon Rashid ◽  
...  

There is significant interest in facial emotion recognition in the fields of human–computer interaction and the social sciences. With the advancements in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially human emotion, has evolved significantly. The most standard methods of emotion recognition are currently deployed as models on remote servers. We believe that reducing the distance between the input device and the model server can lead to better efficiency and effectiveness in real-life applications. For the same purpose, computational methodologies such as edge computing can be beneficial. They can also enable time-critical applications in sensitive fields. In this study, we propose a Raspberry Pi-based standalone edge device that can detect facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article is mainly built around a dataset of employees working in organizations. The Raspberry Pi-based standalone edge device has been implemented using the Mini-Xception deep network because of its computational efficiency and shorter inference time compared with other networks. The device achieved 100% accuracy for detecting faces in real time and 68% accuracy for emotion recognition, which is higher than the accuracy reported in the state of the art on the FER-2013 dataset. Future work will implement the deep network on the Raspberry Pi with an Intel Movidius Neural Compute Stick to reduce the processing time and achieve a quick, real-time implementation of the facial emotion recognition system.
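The abstract identifies the building blocks (real-time face detection followed by Mini-Xception emotion classification trained on FER-2013) without giving the pipeline itself. A minimal sketch of such an edge pipeline might look like the following, assuming a Haar-cascade face detector, a pre-trained Mini-Xception weights file (mini_xception_fer2013.hdf5, hypothetical name) with 64x64 grayscale input, and TensorFlow/Keras available on the Raspberry Pi.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical file names; a pre-trained Mini-Xception weights file is assumed.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_net = load_model("mini_xception_fer2013.hdf5", compile=False)

cap = cv2.VideoCapture(0)  # Pi camera or USB webcam exposed as /dev/video0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        face = face.astype("float32") / 255.0          # scale to [0, 1]
        face = face.reshape(1, 64, 64, 1)              # batch of one
        probs = emotion_net.predict(face, verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```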


2020 ◽  
Vol 9 (1) ◽  
pp. 1022-1027

Driving a vehicle has become a tedious job nowadays due to heavy traffic, so focus on driving is of utmost importance. This creates scope for automation in automobiles to minimize human intervention in controlling dashboard functions such as headlamps, indicators, power windows and the wiper system; this paper is a small effort toward making driving distraction-free using a voice-controlled dashboard. The system proposed in this paper works on speech commands from the user (driver or passenger). As the speech recognition system acts as the human-machine interface (HMI), the system uses both speaker recognition and speech recognition to recognize the command and to verify that it comes from an authenticated user (driver or passenger). The system performs feature extraction, extracting speech features such as Mel-Frequency Cepstral Coefficients (MFCC), Power Spectral Density (PSD), pitch and the spectrogram. For feature matching, the system uses the Vector Quantization Linde-Buzo-Gray (VQ-LBG) algorithm, which uses the Euclidean distance between the test features and the codebook features. Based on the recognized speech command, the controller (Raspberry Pi 3B) activates the device driver for the motor or solenoid valve, depending on the function. This system is mainly aimed at low-noise environments, as most speech recognition systems suffer when noise is introduced. Room acoustics also matter greatly, as the recognition rate varies with acoustics. In several testing and simulation trials, the system achieved a speech recognition rate of 76.13%. This system encourages automation of the vehicle dashboard and hence makes driving distraction-free.
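The abstract names MFCC extraction, VQ-LBG codebooks, and Euclidean-distance matching but not their implementation. A minimal sketch of the matching step is given below, assuming librosa for MFCC extraction and per-command codebooks trained offline with an LBG/k-means style procedure (not shown); file names and labels are hypothetical.

```python
import numpy as np
import librosa

def mfcc_features(wav_path, n_mfcc=13):
    """Load an utterance and return its frame-wise MFCC matrix (frames x coeffs)."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

def vq_distortion(features, codebook):
    """Average Euclidean distance from each frame to its nearest codeword.

    `codebook` is an (n_codewords x n_mfcc) array produced offline by an
    LBG-style training step for one command or one speaker.
    """
    # Pairwise distances: frames x codewords.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def recognize(wav_path, codebooks):
    """Return the label whose codebook gives the lowest average distortion."""
    feats = mfcc_features(wav_path)
    return min(codebooks, key=lambda label: vq_distortion(feats, codebooks[label]))

# Usage sketch with hypothetical, pre-trained codebooks:
# codebooks = {"headlamp_on": np.load("cb_headlamp_on.npy"),
#              "wiper_on": np.load("cb_wiper_on.npy")}
# command = recognize("test_utterance.wav", codebooks)
```

The recognized command label would then be mapped to a GPIO or driver action on the Raspberry Pi, as the abstract describes for the motor and solenoid valve.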


2021 ◽  
pp. 393-405
Author(s):  
Kaiyan Zhou ◽  
Yanqing Wang ◽  
Yongquan Li

Author(s):  
Sheikh Md. Razibul Hasan Raj ◽  
Sultana Jahan Mukta ◽  
Tapan Kumar Godder ◽  
Md. Zahidul Islam

2018 ◽  
Vol 7 (3.15) ◽  
pp. 174 ◽  
Author(s):  
Yuslinda Wati Mohamad Yusof ◽  
Muhammad Asyraf Mohd Nasir ◽  
Kama Azura Othman ◽  
Saiful Izwan Suliman ◽  
Shahrani Shahbudin ◽  
...  

This project focuses on implementing face recognition to create a fully automated attendance system with cloud support. Cloud services provide useful information regarding attendance, such as attendance summaries, and visualize the data as graphs and charts. In this study, we aim to create an online student attendance database interfaced with a face recognition system based on a Raspberry Pi 3 Model B. A graphical user interface (GUI) provides ease of use for data analysis on the attendance system. This work used the Open Computer Vision (OpenCV) library and Python for the face recognition system, combined with SFTP to establish a connection to an internet server running PHP and Node.js. The results showed that by interfacing a face recognition system with a server, a real-time attendance system can be built and monitored remotely.
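The abstract states that OpenCV, Python, and SFTP link the Raspberry Pi to the web server but does not show how. The following rough sketch works under stated assumptions: an LBPH recognizer from opencv-contrib-python for face matching (one of several recognizers OpenCV offers, not necessarily the one used here), paramiko for SFTP, and hypothetical paths, labels, and credentials produced by an enrolment step.

```python
import csv
import datetime
import cv2
import paramiko

# Hypothetical model, mapping, and credentials; a pre-trained LBPH model
# (opencv-contrib-python) is assumed to exist from an enrolment step.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("lbph_students.yml")
STUDENTS = {1: "alice", 2: "bob"}

def mark_attendance(frame, log_path="attendance.csv", max_distance=70.0):
    """Recognize faces in one frame and append matches to a local CSV log."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
            label, distance = recognizer.predict(gray[y:y + h, x:x + w])
            if distance <= max_distance and label in STUDENTS:
                writer.writerow([STUDENTS[label],
                                 datetime.datetime.now().isoformat()])

def upload_log(log_path="attendance.csv",
               host="example.org", user="pi", password="secret",
               remote_path="/var/www/attendance/attendance.csv"):
    """Push the local attendance log to the web server over SFTP."""
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put(log_path, remote_path)
    sftp.close()
    transport.close()
```

On the server side, the PHP/Node.js application mentioned in the abstract would read the uploaded file to build the attendance summaries and charts.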

