Computer Vision and IoT based Automated Hydroponic Farms in Urban Areas - A Soilless Cultivation

As the population increases and natural resources decline, the ability to supply humankind with an adequate amount of food becomes increasingly difficult. The amount of agricultural land shrinks relative to the growing population, so the amount of food produced will fall significantly and will be insufficient to feed the growing population. Conventional methods of farming will soon no longer suffice. Thus, using modern technology and resources, a method of efficient farming must be introduced and employed in the agricultural field. This report introduces a method of efficient farming using hydroponics. The system is automated and uses sensor data to make decisions that benefit the crops being grown. It runs on a Raspberry Pi and an Arduino, and utilizes OpenCV. With this system we hope to address the potential food crisis and give everyone access to fresh produce all year round.

Author(s):  
G. Subhashini ◽  
Anas Aiman Albanna ◽  
Raed Abdullah
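
The abstract above mentions a Raspberry Pi/Arduino controller that acts on sensor data, but gives no implementation details. Purely as an illustration, the following Python sketch shows the kind of control loop such a hydroponic system might run; the set-points and the helpers read_ph, read_ec and actuate are hypothetical stubs, not the authors' code.

```python
import time

# Hypothetical set-points; the report does not state its thresholds.
PH_RANGE = (5.5, 6.5)   # typical pH window for leafy greens in hydroponics
EC_RANGE = (1.2, 2.0)   # nutrient concentration in mS/cm

def read_ph() -> float:
    """Stub: in a real build this would read the pH probe via the Arduino."""
    return 6.0

def read_ec() -> float:
    """Stub: would read the electrical-conductivity (nutrient) sensor."""
    return 1.5

def actuate(pump: str, seconds: float) -> None:
    """Stub: would drive a relay or peristaltic pump via GPIO or the Arduino."""
    print(f"running {pump} pump for {seconds} s")

def control_step() -> None:
    """One decision cycle: compare readings against set-points and dose accordingly."""
    ph, ec = read_ph(), read_ec()
    if ph < PH_RANGE[0]:
        actuate("pH-up", 2)
    elif ph > PH_RANGE[1]:
        actuate("pH-down", 2)
    if ec < EC_RANGE[0]:
        actuate("nutrient", 5)      # dose concentrate when the solution is too dilute
    elif ec > EC_RANGE[1]:
        actuate("fresh-water", 5)   # dilute when the solution is too concentrated

if __name__ == "__main__":
    while True:
        control_step()
        time.sleep(60)   # sample once a minute
```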

One of the most important features of a car is its braking system and engine. The braking system enables the driver to control the speed of the vehicle when the need arises, protecting the car, the driver and other road users from accidents that might be fatal. The performance of the entire car also relies largely on the effective operation of the engine, whose ability to deliver the required performance hinges on its temperature. In recent years a variety of IoT-based monitoring and control systems have been explored in many areas of modern technology. This final-year research project proposes the design and development of an IoT-based vehicle brake failure and engine overheating system. The proposed system utilizes a network of sensors to monitor the temperature of the car engine, obstacles along the path of the car and the speed of the vehicle. The sensor data retrieved from the monitoring system is used by the control system, consisting of a microcontroller, to make automatic decisions for the braking and engine overheating system. A warning system consisting of an LCD, a buzzer and an LED warns the driver about the operation of the braking and engine overheating system. Two boards have been utilized in this research: an Arduino Uno for sensor data acquisition and processing, and a Raspberry Pi for sending the data wirelessly to a web platform. The web platform developed enables the user to remotely access real-time and historical data from the brake failure and engine overheating system. A variety of tests were conducted to evaluate the system's performance: 95.4% accuracy was achieved in terms of the ability of the car to brake effectively and automatically in the presence of obstacles and in terms of speed control, and testing of the system's ability to monitor the engine temperature shows that it achieves 97.5% accuracy. The IoT system transmits the retrieved sensor data over both Wi-Fi and mobile data, with average transmission times of 2.32 s and 4.33 s, respectively.
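
The abstract does not include the data pipeline itself; as a hedged illustration only, the sketch below shows how a Raspberry Pi script might read the Arduino's sensor stream over serial, apply simple braking/overheating thresholds and forward the readings to a web platform. The serial message format, thresholds and endpoint URL are assumptions, not details from the project.

```python
import json
import time

import requests   # pip install requests
import serial     # pip install pyserial

# Assumed serial link to the Arduino Uno that owns the sensors;
# the port, baud rate and message format are illustrative only.
ARDUINO_PORT = "/dev/ttyACM0"
WEB_ENDPOINT = "https://example.com/api/telemetry"   # hypothetical web platform

ENGINE_TEMP_LIMIT_C = 105   # assumed overheating threshold
OBSTACLE_LIMIT_CM = 50      # assumed minimum safe distance before auto-braking

def main() -> None:
    ser = serial.Serial(ARDUINO_PORT, 9600, timeout=2)
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # Assume the Arduino streams one JSON object per line, e.g.
        # {"temp_c": 92.1, "distance_cm": 180, "speed_kmh": 42}
        try:
            reading = json.loads(line)
        except json.JSONDecodeError:
            continue

        # Simple threshold decisions, mirroring the warning/brake logic described.
        reading["overheat"] = reading.get("temp_c", 0) > ENGINE_TEMP_LIMIT_C
        reading["brake"] = reading.get("distance_cm", 1e9) < OBSTACLE_LIMIT_CM

        try:
            requests.post(WEB_ENDPOINT, json=reading, timeout=5)
        except requests.RequestException:
            pass   # keep sampling even if the uplink drops
        time.sleep(1)

if __name__ == "__main__":
    main()
```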


2018 ◽  
Vol 1 (2) ◽  
pp. 17-23
Author(s):  
Takialddin Al Smadi

This survey outlines the use of computer vision in image and video processing across multidisciplinary applications, in both academia and industry, where this field is active. The scope of the paper covers theoretical and practical aspects of image and video processing, in addition to computer vision, from essential research to the evolution of applications. Various subjects of image processing and computer vision are demonstrated, spanning from the evolution of mobile augmented reality (MAR) applications to augmented reality with 3D modeling and real-time depth imaging; video processing algorithms for achieving higher-depth video compression are discussed; in the field of mobile platforms, an automatic computer vision system for citrus fruit has been implemented, and Bayesian classification with Boundary Growing is used to detect text in the video scene. The paper also illustrates the usability of a hand-based interactive method for a portable projector based on augmented reality. © 2018 JASET, International Scholars and Researchers Association


2020 ◽  
Vol 67 (1) ◽  
pp. 133-141
Author(s):  
Dmitriy O. Khort ◽  
Aleksei I. Kutyrev ◽  
Igor G. Smirnov ◽  
Rostislav A. Filippov ◽  
Roman V. Vershinin

Technological capabilities of agricultural units cannot be optimally used without extensive automation of production processes and the use of advanced computer control systems. (Research purpose) To develop an algorithm for recognizing the location coordinates and ripeness of garden strawberries in different lighting conditions, and to describe the technological process of harvesting them in field conditions using a robotic actuator mounted on a self-propelled platform. (Materials and methods) The authors have developed a self-propelled platform with an automatic actuator for harvesting garden strawberries, which includes an actuator with six degrees of freedom, a coaxial gripper, MG966R servos, a PCA9685 controller, a Logitech HD C270 computer vision camera, a single-board Raspberry Pi 3 Model B+ computer, VL53L0X laser sensors, a SZBK07 300 W voltage regulator, and a Hubsan X4 Pro H109S Li-polymer battery. (Results and discussion) Using the Python 3.7.2 programming language, the authors have developed a control algorithm for the automatic actuator, including operations to determine the X and Y coordinates of berries and their degree of maturity, as well as to calculate the distance to the berries. It has been found that the effectiveness of detecting berries, their area and their boundaries with a camera and the OpenCV library reaches 94.6 percent at an illumination of 300 lux. With an increase in the robotic platform speed to 1.5 kilometers per hour, the average area of the recognized berries, compared with the real area of the berries, decreased by 9 percent to 95.1 square centimeters at an illumination of 300 lux, by 17.8 percent to 88 square centimeters at 200 lux, and by 36.4 percent to 76 square centimeters at 100 lux. (Conclusions) The authors have provided a rationale for the technological process and developed an algorithm for harvesting garden strawberries using a robotic actuator mounted on a self-propelled platform. It has been shown that lighting conditions have a significant impact on the determination of the area, boundaries and ripeness of berries using a computer vision camera.
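
The abstract reports OpenCV-based detection of berry position, boundaries and area, but the actual algorithm is not given. The following is a minimal sketch of one common approach (HSV colour thresholding plus contour analysis); the colour ranges and minimum-area filter are illustrative assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

# Illustrative HSV ranges for ripe (red) strawberries; red wraps around the
# hue axis, so two ranges are combined. Values are assumptions.
RED_LO1, RED_HI1 = (0, 80, 60), (10, 255, 255)
RED_LO2, RED_HI2 = (170, 80, 60), (180, 255, 255)
MIN_AREA_PX = 500   # ignore small blobs (noise, partially visible berries)

def detect_berries(frame_bgr: np.ndarray):
    """Return (cx, cy, area_px) for each red blob large enough to be a berry."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(RED_LO1), np.array(RED_HI1)) | \
           cv2.inRange(hsv, np.array(RED_LO2), np.array(RED_HI2))
    # Remove speckle noise before extracting boundaries.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    berries = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < MIN_AREA_PX:
            continue
        m = cv2.moments(c)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        berries.append((cx, cy, area))
    return berries

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)   # e.g. a USB camera such as the Logitech C270
    ok, frame = cap.read()
    if ok:
        print(detect_berries(frame))
    cap.release()
```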


1972 ◽  
Vol 26 (2) ◽  
pp. 469-478 ◽  
Author(s):  
David A. Kay ◽  
Eugene B. Skolnikoff

In the industrialized northern hemisphere we are assaulted daily with evidence of the deteriorating quality of the human environment: Rivers are closed to fishing because of dangerous levels of contamination; the safety of important foods is challenged; the foul air that major urban areas have been forced to endure is now spreading like an inkblot into surrounding areas. Lack of early concern about the implications for the environment of the widespread application of modern technology has allowed the problem to grow rapidly into a critical domestic and international issue.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 683
Author(s):  
José L. Escalona ◽  
Pedro Urda ◽  
Sergio Muñoz

This paper describes the kinematics used for the calculation of track geometric irregularities by a new Track Geometry Measuring System (TGMS) to be installed in railway vehicles. The TGMS includes a computer for data acquisition and processing, and a set of sensors comprising an inertial measurement unit (IMU: 3D gyroscope and 3D accelerometer), two video cameras and an encoder. The kinematic description, which is borrowed from the multibody dynamics analysis of railway vehicles used in computer simulation codes, is used to calculate the relative motion between the vehicle and the track, and also for the computer vision system and its calibration. The multibody framework is thus used to find the formulas needed to calculate the track irregularities (gauge, cross-level, alignment and vertical profile) as a function of the sensor data. The TGMS has been experimentally tested on a 1:10 scaled vehicle and track specifically designed for this investigation. The geometric irregularities of a 90 m scaled track have been measured with an alternative, accurate method and compared with the results of the TGMS. The results show good agreement between the two methods of calculating the geometric irregularities.
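
The paper derives the irregularities from a full multibody kinematic description and sensor fusion, which the abstract does not reproduce. As a strongly simplified illustration only, the sketch below shows how the four quantities (gauge, cross-level, alignment, vertical profile) relate to measured left/right railhead points in a track-following frame; the frame convention, the zero design line and the sample numbers are assumptions, not the paper's formulation.

```python
from dataclasses import dataclass

@dataclass
class RailPoints:
    """Left/right railhead points in a track-following frame (metres).
    y is lateral, z is vertical; in the TGMS these would come from the
    cameras fused with the IMU and encoder data."""
    y_left: float
    z_left: float
    y_right: float
    z_right: float

NOMINAL_GAUGE = 1.435   # standard gauge; the paper actually uses a 1:10 scaled track

def gauge(p: RailPoints) -> float:
    """Lateral distance between the two rail points."""
    return p.y_left - p.y_right

def cross_level(p: RailPoints) -> float:
    """Height difference between the rails (superelevation irregularity)."""
    return p.z_left - p.z_right

def alignment(p: RailPoints) -> float:
    """Lateral offset of the track centreline from the design line (here y = 0)."""
    return 0.5 * (p.y_left + p.y_right)

def vertical_profile(p: RailPoints) -> float:
    """Vertical offset of the track centreline from the design line (here z = 0)."""
    return 0.5 * (p.z_left + p.z_right)

if __name__ == "__main__":
    sample = RailPoints(y_left=0.7195, z_left=0.002, y_right=-0.7180, z_right=-0.001)
    print(f"gauge irregularity: {gauge(sample) - NOMINAL_GAUGE:+.4f} m")
    print(f"cross-level:        {cross_level(sample):+.4f} m")
    print(f"alignment:          {alignment(sample):+.4f} m")
    print(f"vertical profile:   {vertical_profile(sample):+.4f} m")
```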


2021 ◽  
Author(s):  
David Lopez Perez ◽  
Zuzanna Laudanska ◽  
Alicja Radkowska ◽  
Karolina Babis ◽  
Agata Koziol ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 613
Author(s):  
David Safadinho ◽  
João Ramos ◽  
Roberto Ribeiro ◽  
Vítor Filipe ◽  
João Barroso ◽  
...  

The capability of drones to perform autonomous missions has led retail companies to use them for deliveries, saving time and human resources. In these services, the delivery depends on the Global Positioning System (GPS) to define an approximate landing point. However, the landscape can interfere with the satellite signal (e.g., tall buildings), reducing the accuracy of this approach. Changes in the environment can also invalidate the safety of a previously defined landing site (e.g., irregular terrain, a swimming pool). Therefore, the main goal of this work is to improve the process of goods delivery using drones, focusing on the detection of the potential receiver. We developed a solution that was refined through an iterative assessment composed of five test scenarios. The prototype complements GPS with Computer Vision (CV) algorithms based on Convolutional Neural Networks (CNN), running on a Raspberry Pi 3 with a Pi NoIR camera (i.e., No InfraRed: without an infrared filter). The experiments were performed with the Single Shot Detector (SSD) MobileNet-V2 and SSDLite-MobileNet-V2 models. The best results were obtained in the afternoon with the SSDLite architecture, for distances and heights between 2.5 and 10 m, with recalls from 59% to 76%. The results confirm that a low-computing-power, cost-effective system can perform aerial human detection and estimate the landing position without an additional visual marker.
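
The abstract names the models (SSD/SSDLite MobileNet-V2) and the Raspberry Pi 3 platform but not the inference code. A minimal sketch of person detection with a converted TensorFlow Lite SSD model is shown below; the model file name, the assumed uint8 input quantization and the output-tensor ordering are assumptions that would need to be checked against the specific export.

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter   # pip install tflite-runtime

MODEL_PATH = "ssdlite_mobilenet_v2.tflite"   # hypothetical converted model file
PERSON_CLASS_ID = 0                          # COCO "person" in most TFLite SSD exports
SCORE_THRESHOLD = 0.5

def detect_people(image_path: str):
    """Run the detector on one frame and return bounding boxes classified as person."""
    interpreter = Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    outs = interpreter.get_output_details()

    # Resize the frame to the model's expected input size (assumed uint8-quantized).
    _, height, width, _ = inp["shape"]
    img = Image.open(image_path).convert("RGB").resize((width, height))
    interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img, np.uint8), 0))
    interpreter.invoke()

    # Typical SSD TFLite output ordering: boxes, classes, scores, count
    # (this ordering is an assumption; verify it against the actual export).
    boxes = interpreter.get_tensor(outs[0]["index"])[0]
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]

    return [box for box, cls, score in zip(boxes, classes, scores)
            if int(cls) == PERSON_CLASS_ID and score >= SCORE_THRESHOLD]

if __name__ == "__main__":
    for box in detect_people("frame.jpg"):
        print("person at (ymin, xmin, ymax, xmax) =", box)
```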

