A low-cost Raspberry Pi-based vision system for upper-limb prosthetics

Author(s): Rinku Roy, Kianoush Nazarpour

Electronics, 2019, Vol 8 (11), pp. 1295
Author(s): Julio Vega, José M. Cañas

Vision devices are currently one of the most widely used sensory elements in robots: commercial autonomous cars and vacuum cleaners, for example, have cameras. These vision devices can provide a great amount of information about the robot's surroundings. However, platforms for robotics education usually lack such devices, mainly because of the computing limitations of low-cost processors. New educational platforms using the Raspberry Pi are able to overcome this limitation while keeping costs low, but extracting information from the raw images is complex for children. This paper presents an open-source vision system that simplifies the use of cameras in robotics education. It includes functions for the visual detection of complex objects and a visual memory that computes obstacle distances beyond the small field of view of regular cameras. The system was experimentally validated using the PiCam camera mounted on a pan unit on a Raspberry Pi-based robot. The performance and accuracy of the proposed vision system were studied and then used to solve two visual educational exercises: safe visual navigation with obstacle avoidance and person-following behavior.
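
As an illustration of the kind of exercise such a system enables, the sketch below captures frames from the Pi camera with OpenCV and locates a brightly colored object by its centroid. It is not the authors' open-source library; the HSV color range and the camera index are assumptions.

```python
# Minimal sketch: locate a colored object in the PiCam stream with OpenCV.
# The HSV range (a rough "green object" band) and camera index 0 are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)               # PiCam exposed through V4L2
LOWER = np.array([35, 80, 80])          # assumed lower HSV bound
UPPER = np.array([85, 255, 255])        # assumed upper HSV bound

for _ in range(100):                    # process 100 frames and stop
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] > 0:                    # object found: report its centroid
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print(f"object at ({cx}, {cy})")

cap.release()
```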


Author(s): Tomás Serrano-Ramírez, Ninfa del Carmen Lozano-Rincón, Arturo Mandujano-Nava, Yosafat Jetsemaní Sámano-Flores

Computer vision systems are an essential part of industrial automation tasks such as identification, selection, measurement, defect detection, and quality control of parts and components. Smart cameras are available to perform these tasks; however, their high acquisition and maintenance costs are restrictive. In this work, a novel low-cost artificial vision system is proposed for classifying objects in real time, using the Raspberry Pi 3B+ embedded system, a web camera, and the OpenCV computer vision library. The suggested technique comprises the training of a supervised classifier of the Haar cascade type on image banks of the object to be recognized, subsequently generating a predictive model that is put to the test with real-time detection, together with the calculation of the prediction error. The aim is to build a vision system that is powerful, affordable, and developed entirely with free software.
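
A minimal sketch of the detection stage described above, using OpenCV's CascadeClassifier on a webcam stream; the cascade file name is a hypothetical stand-in for the model trained on the authors' image banks.

```python
# Hedged sketch of Haar-cascade detection with OpenCV on a webcam stream.
# "object_cascade.xml" is a hypothetical path to a trained cascade.
import cv2

cascade = cv2.CascadeClassifier("object_cascade.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```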


2019, Vol 2019, pp. 1-12
Author(s): Marcos J. Villaseñor-Aguilar, J. Enrique Botello-Álvarez, F. Javier Pérez-Pinal, Miroslava Cano-Lara, M. Fabiola León-Galván, ...

Artificial vision systems (AVS) have become very important in precision agriculture, where they are applied to produce high-quality, low-cost foods with high functional characteristics generated through environmental care practices. This article reports the design and implementation of a new fuzzy classification architecture based on the RGB color model with descriptors. Three inputs were used, corresponding to the average values of the color components over four views of the tomato; three triangular membership functions were associated with the R and B components and four with the G component. Forty tomato samples were used for training and twenty for testing; the training was done using the Matlab© ANFISEDIT tool. The tomato samples were divided into six categories according to the US Department of Agriculture (USDA). This study focused on optimizing the descriptors of the color space to achieve high precision in the final classification task, reaching a prediction error of 536,995 × 10-6. The computer vision system (CVS) comprises an image isolation system with lighting; the image capture system uses a Raspberry Pi 3 and a Raspberry Pi Camera Module 2 at a fixed distance against a black background. In the implementation of the CVS, three different color description methods for tomato classification were analyzed and their respective fuzzy systems were designed, two of them using descriptors described in the literature.
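
The descriptor computation can be illustrated with the short sketch below, which averages the R, G, and B components of the fruit pixels over four views of one tomato; it is not the reported ANFIS classifier, and the file names and background threshold are assumptions.

```python
# Illustrative sketch: compute mean R, G, B descriptors over four tomato views.
# File names and the background-rejection threshold are assumptions.
import cv2
import numpy as np

views = ["view_front.png", "view_back.png", "view_left.png", "view_right.png"]
means = []
for path in views:
    img = cv2.imread(path)                              # BGR image, black background
    mask = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) > 10   # keep only fruit pixels
    means.append(img[mask].mean(axis=0))                # mean B, G, R per view

b_mean, g_mean, r_mean = np.mean(means, axis=0)         # average over the four views
print(f"descriptors: R={r_mean:.1f}, G={g_mean:.1f}, B={b_mean:.1f}")
```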


Sensors, 2018, Vol 18 (7), pp. 2244
Author(s): Diulhio de Oliveira, Marco Wehrmeister

The use of Unmanned Aerial Vehicles (UAV) has been increasing over the last few years in many sorts of applications, due mainly to the decreasing cost of this technology. UAVs are used in several civilian applications, such as surveillance and search and rescue. Automatic detection of pedestrians in aerial images is a challenging task. The computer vision system must deal with many sources of variability in the aerial images captured with the UAV, e.g., low-resolution images of pedestrians, images captured at distinct angles due to the degrees of freedom in which a UAV can move, and the instability that the camera platform may experience while the UAV flies, among others. In this work, we created and evaluated different implementations of Pattern Recognition Systems (PRS) aiming at the automatic detection of pedestrians in aerial images captured with a multirotor UAV. The main goal is to assess the feasibility and suitability of distinct PRS implementations running on top of low-cost computing platforms, e.g., single-board computers such as the Raspberry Pi or regular laptops without a GPU. For that, we used four machine learning techniques in the feature extraction and classification steps, namely Haar cascade, LBP cascade, HOG + SVM, and Convolutional Neural Networks (CNN). In order to improve the system performance (especially the processing time) and also to decrease the rate of false alarms, we applied Saliency Maps (SM) and Thermal Image Processing (TIP) within the segmentation and detection steps of the PRS. The classification results show the CNN to be the best technique, with 99.7% accuracy, followed by HOG + SVM with 92.3%. In situations of partial occlusion, the CNN showed 71.1% sensitivity, which can be considered a good result in comparison with the current state of the art, since part of the original image data is missing. As demonstrated in the experiments, by combining TIP with CNN, the PRS can process more than two frames per second (fps), whereas the PRS that combines TIP with HOG + SVM was able to process 100 fps. It is important to mention that our experiments show that a trade-off analysis must be performed during the design of a pedestrian detection PRS. The faster implementations lead to a decrease in the PRS accuracy. For instance, by using HOG + SVM with TIP, the PRS presented the best performance results, but the obtained accuracy was 35 percentage points lower than that of the CNN. The obtained results indicate that the best detection technique (i.e., the CNN) requires more computational resources to decrease the PRS computation time. Therefore, this work shows and discusses the pros and cons of each technique and the trade-off situations, and hence one can use such an analysis to improve and tailor the design of a PRS to detect pedestrians in aerial images.
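
For reference, the HOG + SVM baseline can be sketched with OpenCV's built-in people detector, as below; the paper trained its own models on aerial imagery, so this default detector and the input file name are only illustrative assumptions.

```python
# Sketch of a HOG + SVM pedestrian detector using OpenCV's default people model.
# "aerial_frame.png" is a hypothetical input; the paper used custom-trained models.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("aerial_frame.png")
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("aerial_frame_detections.png", frame)   # save annotated output
```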


2021, Vol 11 (1)
Author(s): Supakorn Harnsoongnoen, Nuananong Jaroensuk

Water displacement and flotation are two of the most accurate and rapid methods for grading and assessing the freshness of agricultural products based on density determination. However, these techniques are not suitable for inspecting products such as eggs, which absorb water; the immersion can be considered intrusive or destructive and can affect the measurement results. Here we present a non-destructive, non-invasive, low-cost, simple, and real-time method for grading and assessing the freshness of eggs based on density detection using machine vision and a weighing sensor. This is the first proposal that divides egg freshness into intervals through density measurements. The machine vision system measures the external physical characteristics of the egg (length and breadth) to estimate its volume. The weighing system measures the weight of the egg. Egg weight and volume were then used to calculate density for grading and freshness assessment. The proposed system could measure the weight, volume and density with an accuracy of 99.88%, 98.26% and 99.02%, respectively. The results showed that the weight and freshness of eggs stored at room temperature decreased with storage time. The relationship between density and percentage of freshness was linear for all egg sizes, with coefficients of determination (R2) of 0.9982, 0.9999, 0.9996, 0.9996 and 0.9994 for egg size classes 0, 1, 2, 3 and 4, respectively. This study shows that egg freshness can be determined through density without using water displacement or flotation tests, which gives the system future potential as an important measurement tool for the poultry industry.
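
The grading computation can be illustrated as follows; the prolate-spheroid volume formula and the example measurements are assumptions, since the paper derives volume from the length and breadth measured by the vision system.

```python
# Minimal sketch of the density computation. The spheroid approximation
# V = (pi/6) * L * B^2 and the example egg dimensions are assumptions.
import math

def egg_volume_cm3(length_cm: float, breadth_cm: float) -> float:
    """Approximate the egg as a prolate spheroid."""
    return math.pi / 6.0 * length_cm * breadth_cm ** 2

def egg_density(weight_g: float, length_cm: float, breadth_cm: float) -> float:
    """Density in g/cm^3 from the weighing sensor and the vision measurements."""
    return weight_g / egg_volume_cm3(length_cm, breadth_cm)

# Example: a 60 g egg measuring 5.7 cm x 4.4 cm
d = egg_density(60.0, 5.7, 4.4)
print(f"density = {d:.3f} g/cm^3")   # fresher eggs have higher density
```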


Processes, 2021, Vol 9 (6), pp. 915
Author(s): Gözde Dursun, Muhammad Umer, Bernd Markert, Marcus Stoffel

(1) Background: Bioreactors mimic the natural environment of cells and tissues by providing a controlled micro-environment. However, their design is often expensive and complex. Herein, we introduce the development of a low-cost compression bioreactor that enables the application of different mechanical stimulation regimes to in vitro tissue models and provides information on the applied stress and strain in real time. (2) Methods: The compression bioreactor is designed around a Raspberry Pi mini-computer, which is programmed to apply compressive deformation at various strains and frequencies, as well as to measure the force applied to the tissue constructs. In addition, we developed a mobile application connected to the bioreactor software to monitor, command, and control experiments via mobile devices. (3) Results: Cell viability results indicate that the newly designed compression bioreactor supports cell cultivation in a sterile environment without any contamination. The bioreactor software plots the experimental data of dynamic mechanical loading over long-term experiments and stores them for further data processing. Following in vitro uniaxial compression conditioning of 3D in vitro cartilage models, chondrocyte cell migration was altered positively compared to static cultures. (4) Conclusion: The developed compression bioreactor can support in vitro tissue model cultivation and monitor the experimental information with a low-cost control system and via a mobile application. The highly customizable mold inside the cultivation chamber addresses the limited customization capability of traditional bioreactors. Most importantly, the compression bioreactor prevents operator- and system-dependent variability between experiments by enabling dynamic culture of multiple in vitro tissue constructs in a large volume.
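
A conceptual sketch of such a cyclic-compression control loop is given below; the actuator and load-cell interfaces are not specified here, so set_actuator_position() and read_load_cell() are hypothetical placeholders, and the strain amplitude, frequency, and sample height are assumed values.

```python
# Conceptual sketch of a Raspberry Pi cyclic-compression loop with force logging.
# I/O functions are hypothetical placeholders, not the published bioreactor code.
import math
import time

STRAIN_AMPLITUDE = 0.10      # 10 % compressive strain (assumed)
FREQUENCY_HZ = 1.0           # loading frequency (assumed)
SAMPLE_HEIGHT_MM = 3.0       # initial construct height (assumed)

def set_actuator_position(displacement_mm: float) -> None:
    pass                     # placeholder: drive the linear actuator

def read_load_cell() -> float:
    return 0.0               # placeholder: return measured force in newtons

t0 = time.time()
with open("loading_log.csv", "w") as log:
    log.write("time_s,displacement_mm,force_N\n")
    while time.time() - t0 < 10.0:                     # 10 s demo run
        t = time.time() - t0
        strain = STRAIN_AMPLITUDE * 0.5 * (1 - math.cos(2 * math.pi * FREQUENCY_HZ * t))
        disp = strain * SAMPLE_HEIGHT_MM               # sinusoidal compression
        set_actuator_position(disp)
        log.write(f"{t:.3f},{disp:.4f},{read_load_cell():.3f}\n")
        time.sleep(0.01)
```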


Sensors, 2021, Vol 21 (2), pp. 343
Author(s): Kim Bjerge, Jakob Bonde Nielsen, Martin Videbæk Sepstrup, Flemming Helsing-Nielsen, Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes the detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images, with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths representing eight different classes, achieving a high validation F1-score of 0.93. The algorithm achieved an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
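
The classification step can be sketched as below with a standard CNN backbone; the network architecture, weights file, and class names are assumptions and not the authors' released MCC model.

```python
# Hedged sketch of the classification step only (MCC also tracks and counts).
# The ResNet-18 backbone, "moth_cnn.pth" weights, and class names are assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["class_0", "class_1", "class_2", "class_3",
           "class_4", "class_5", "class_6", "class_7"]   # eight moth classes

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("moth_cnn.pth", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

img = preprocess(Image.open("trap_crop.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]
print(CLASSES[int(probs.argmax())], float(probs.max()))
```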


Prosthesis, 2021, Vol 3 (2), pp. 110-118
Author(s): Hannah Jones, Sigrid Dupan, Maxford Coutinho, Sarah Day, Deirdre Desmond, ...

People who either use an upper limb prosthesis and/or have used services provided by a prosthetic rehabilitation centre, hereafter called users, are yet to benefit from the fast-paced growth in academic knowledge within the field of upper limb prosthetics. Crucially, over the past decade, research has acknowledged the limitations of conducting laboratory-based studies for clinical translation. This has led to an increase, albeit a rather small one, in trials that gather real-world user data. Multi-stakeholder collaboration is critical within such trials, especially between researchers, users, and clinicians, as well as policy makers, charity representatives, and industry specialists. This paper presents a co-creation model that enables researchers to collaborate with multiple stakeholders, including users, throughout the duration of a study. This approach can lead to a transition in the roles of stakeholders such as users, from participants to co-researchers. This presents a scenario whereby the boundaries between research and participation become blurred and ethical considerations may become complex. However, the time and resources required to conduct co-creation within academia can lead to greater impact and benefit the people that the research aims to serve.

