A Computer Vision Approach for Automatically Mining and Classifying End of Life Products and Components

Author(s):  
Matthew L. Dering ◽  
Conrad S. Tucker

The authors of this work present a computer vision approach that discovers and classifies objects in a video stream, towards an automated system for managing End of Life (EOL) waste streams. Currently, the sorting stage of EOL waste management is an extremely manual and tedious process that increases the costs of EOL options and minimizes its attractiveness as a profitable enterprise solution. A wide range of EOL methodologies has been proposed in the engineering design community that focus on determining the optimal EOL strategies of reuse, recycling, remanufacturing and resynthesis. However, many of these methodologies assume a product/component disassembly cost based on human labor, which thereby increases the cost of EOL waste management. For example, recent EOL options such as resynthesis rely heavily on optimally sorting and combining components in novel ways to form new products. This process, however, requires considerable manual labor that may make the option less attractive for products with highly complex interactions and components. To mitigate these challenges, the authors propose a computer vision system that takes live video streams of incoming EOL waste and i) automatically identifies and classifies products/components of interest and ii) predicts the EOL process that will be needed for a given classified product/component. A case study involving an EOL waste stream video demonstrates the predictive accuracy of the proposed methodology in identifying and classifying EOL objects.

Author(s):  
I. G. Zubov

Introduction. Computer vision systems are finding widespread application in various life domains. Monocular camera-based systems can be used to solve a wide range of problems. The availability of digital cameras and large sets of annotated data, as well as the power of modern computing technology, makes monocular image analysis a dynamically developing direction in the field of machine vision. For a computer vision system to describe objects and predict their actions in the physical space of a scene, the image under analysis must be interpreted in terms of the underlying 3D scene. This can be achieved by analysing a rigid object as a set of mutually arranged parts, which represents a powerful framework for reasoning about physical interaction. Objective. Development of an automatic method for detecting interest points of an object in an image. Materials and methods. An automatic method for identifying interest points of vehicles, such as license plates, in an image is proposed. The method localizes interest points by analysing the inner layers of convolutional neural networks trained for image classification and object detection, and it identifies interest points without incurring the additional costs of data annotation and training. Results. The conducted experiments confirmed the correctness of the proposed method: the accuracy of identifying a point on a license plate reached 97%. Conclusion. A new method for detecting interest points of an object by analysing the inner layers of convolutional neural networks is proposed. It provides accuracy similar to or exceeding that of other modern methods.
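The abstract does not give implementation details, but the core idea of reading an interest point off an inner convolutional layer can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes a 2D activation map from some inner layer that responds to the part of interest (e.g. a license plate) is already available, and maps its peak back to input-image coordinates.

```python
import numpy as np

def locate_interest_point(activation_map, image_shape):
    """Map the peak of an inner-layer activation map back to image coordinates.

    activation_map: 2D array (H_f, W_f) taken from a conv layer assumed to
    respond to the part of interest; image_shape: (H, W) of the input image.
    Returns (row, col) in input-image coordinates.
    """
    # Location of the strongest response in feature-map coordinates.
    fy, fx = np.unravel_index(np.argmax(activation_map), activation_map.shape)
    # Scale factors between feature map and input image.
    scale_y = image_shape[0] / activation_map.shape[0]
    scale_x = image_shape[1] / activation_map.shape[1]
    # Return the centre of the receptive cell with the strongest response.
    return (int((fy + 0.5) * scale_y), int((fx + 0.5) * scale_x))
```

In practice the choice of layer and channel matters greatly; the sketch only shows the coordinate mapping step.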


2018 ◽  
Vol 1 (2) ◽  
pp. 17-23
Author(s):  
Takialddin Al Smadi

This survey outlines the use of computer vision in image and video processing across multidisciplinary applications, in both academia and industry. The scope of the paper covers theoretical and practical aspects of image and video processing as well as computer vision, from fundamental research to the evolution of applications. Various subjects of image processing and computer vision are demonstrated, spanning the evolution of mobile augmented reality (MAR) applications, augmented reality with 3D modeling and real-time depth imaging, and video processing algorithms for higher-depth video compression. On the mobile platform side, an automatic computer vision system for citrus fruit has been implemented, and Bayesian classification with Boundary Growing is used to detect text in video scenes. The paper also illustrates the usability of a hand-based interactive method for a portable projector based on augmented reality. © 2018 JASET, International Scholars and Researchers Association


2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
◽  
Stoian. A.

Detecting objects in a video stream is a typical problem in modern computer vision systems used in many areas. Object detection can be performed both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. In addition, the coordinates, size and other characteristics of these non-uniformities can be computed and used to solve other computer vision problems such as object identification. In this paper, we study three algorithms that can be used to detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference and feature detection. As input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulation and testing of the algorithms were done on a universal computer based on open-source hardware built around the Broadcom BCM2711, a quad-core Cortex-A72 (ARM v8) 64-bit SoC running at 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10 and on a universal computer running Linux (Raspbian Buster OS) for the open-source hardware. The paper compares the methods under consideration. The results can be used in research and development of modern computer vision systems for various purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
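Of the three approaches compared, frame difference is the simplest to illustrate. The sketch below is a minimal NumPy version (the paper itself uses OpenCV 4): it thresholds the absolute intensity difference between two grayscale frames and returns the bounding box of the changed region. Function name and threshold value are illustrative assumptions.

```python
import numpy as np

def frame_difference_box(prev_frame, cur_frame, threshold=25):
    """Detect a moving object as the bounding box of pixels whose grayscale
    intensity changed by more than `threshold` between consecutive frames.

    Returns (x0, y0, x1, y1) in pixel coordinates, or None if nothing moved.
    """
    # Signed arithmetic avoids uint8 wrap-around on subtraction.
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```

A production version would add smoothing and morphological filtering to suppress sensor noise before taking the bounding box.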


Author(s):  
Branka Vulesevic ◽  
Naozumi Kubota ◽  
Ian G Burwash ◽  
Claire Cimadevilla ◽  
Sarah Tubiana ◽  
...  

Abstract Aims Severe aortic valve stenosis (AS) is defined by an aortic valve area (AVA) <1 cm2 or an AVA indexed to body surface area (BSA) <0.6 cm2/m2, despite little evidence supporting the latter approach and important intrinsic limitations of BSA indexation. We hypothesized that AVA indexed to height (H) might be more applicable to a wide range of populations and body morphologies and might provide better predictive accuracy. Methods and results In 1298 patients with degenerative AS and preserved ejection fraction from three different countries and continents (derivation cohort), we aimed to establish an AVA/H threshold that would be equivalent to 1.0 cm2 for defining severe AS. In a distinct prospective validation cohort of 395 patients, we compared the predictive accuracy of AVA/BSA and AVA/H. Correlations between AVA and AVA/BSA or AVA/H were excellent (all R2 > 0.79) but greater with AVA/H. Regression lines were markedly different in obese and non-obese patients with AVA/BSA (P < 0.0001) but almost identical with AVA/H (P = 0.16). AVA/BSA values that corresponded to an AVA of 1.0 cm2 were markedly different in obese and non-obese patients (0.48 and 0.59 cm2/m2) but not with AVA/H (0.61 cm2/m for both). Agreement for the diagnosis of severe AS (AVA < 1 cm2) was significantly higher with AVA/H than with AVA/BSA (P < 0.05). Similar results were observed across the three countries. An AVA/H cut-off value of 0.6 cm2/m [HR = 8.2(5.6–12.1)] provided the best predictive value for the occurrence of AS-related events [absolute AVA of 1 cm2: HR = 7.3(5.0–10.7); AVA/BSA of 0.6 cm2/m2: HR = 6.7(4.4–10.0)]. Conclusion In a large multinational/multiracial cohort, AVA/H was better correlated with AVA than AVA/BSA, and a cut-off value of 0.6 cm2/m provided better diagnostic and prognostic value than 0.6 cm2/m2. Our results suggest that severe AS should be defined as an AVA < 1 cm2 or an AVA/H < 0.6 cm2/m rather than a BSA-indexed value of 0.6 cm2/m2.
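The proposed definition reduces to two simple cut-offs reported in the abstract. A minimal sketch (function name and units are illustrative; this is not a clinical tool):

```python
def is_severe_as(ava_cm2, height_m):
    """Severe aortic stenosis per the proposed definition: AVA < 1.0 cm2
    or height-indexed AVA/H < 0.6 cm2/m.

    ava_cm2: aortic valve area in cm2; height_m: patient height in metres.
    """
    return ava_cm2 < 1.0 or (ava_cm2 / height_m) < 0.6
```

Note how the height-indexed criterion can classify a patient as severe even with AVA slightly above 1.0 cm2 when the patient is tall, which is the point of indexation.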


Minerals ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 791
Author(s):  
Sufei Zhang ◽  
Ying Guo

This paper introduces computer vision systems (CVSs), which provide a new method to measure gem colour, and compares CVS and colourimeter (CM) measurements of jadeite-jade colour in the CIELAB space. The feasibility of using CVS for jadeite-jade colour measurement was verified by an expert group test and a reasonable regression model in an experiment involving 111 samples covering almost all jadeite-jade colours. In the expert group test, more than 93.33% of CVS images were considered to have high similarity with the real objects. Comparing L*, a*, b*, C*, h, and ∆E* (greater than 10) from the CVS and CM tests indicates that significant visual differences exist between the measured colours. For a*, b*, and h, the R2 of the regression model between CVS and CM was 90.2% or more. CVS readings can be used to predict the colour value measured by CM, which means that CVS technology can become a practical tool to detect the colour of jadeite-jade.
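The ∆E* figure quoted above is the standard CIELAB colour difference. As a point of reference, the CIE76 formula is simply the Euclidean distance between two (L*, a*, b*) readings; a minimal sketch (the paper may use a more refined ∆E variant):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference ∆E*ab between two CIELAB (L*, a*, b*) triples.

    Values above roughly 10, as reported in the study, correspond to
    clearly visible colour differences.
    """
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```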


2021 ◽  
pp. 105084
Author(s):  
Bojana Milovanovic ◽  
Ilija Djekic ◽  
Jelena Miocinovic ◽  
Bartosz G. Solowiej ◽  
Jose M. Lorenzo ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
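The F1-scores reported for the MCC algorithm are the usual harmonic mean of precision and recall. For reference, computed from raw detection counts (a generic sketch, not the authors' evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1-score from true-positive, false-positive and false-negative counts:
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A validation F1 of 0.93, as reported, thus implies both precision and recall were close to that level on the eight-class moth dataset.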


Metals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 387
Author(s):  
Martin Choux ◽  
Eduard Marti Bigorra ◽  
Ilya Tyapin

The rapidly growing deployment of Electric Vehicles (EVs) places strong demands not only on the development of Lithium-Ion Batteries (LIBs) but also on their dismantling process, a necessary step for the circular economy. The aim of this study is therefore to develop an autonomous task planner for the dismantling of EV Lithium-Ion Battery packs to module level through the design and implementation of a computer vision system. This research contributes to moving closer towards fully automated EV battery robotic dismantling, an inevitable step for a sustainable world transition to an electric economy. The main functions of the proposed task planner consist in identifying LIB components and their locations, creating a feasible dismantling plan, and moving the robot to the detected dismantling positions. Results show that the proposed method has measurement errors lower than 5 mm. In addition, the system is able to perform all the steps in order, with a total average time of 34 s. Computer vision, robotics and battery disassembly have been successfully unified, resulting in a designed and tested task planner well suited for products with large variations and uncertainties.


Author(s):  
Kiran Tota-Maharaj ◽  
Alexander McMahon

Abstract Wind power produces more electricity than any other form of renewable energy in the United Kingdom (UK) and plays a key role in decarbonisation of the grid. Although wind energy is seen as a sustainable alternative to fossil fuels, there are still several environmental impacts associated with all stages of the lifecycle of a wind farm. This study determined the material composition of wind turbines of various sizes and designs and the prevalence of such turbines over time, to accurately quantify waste generation following wind turbine decommissioning in the UK. The end of life stage is becoming increasingly important as a rapid rise in installation rates suggests an equally rapid rise in decommissioning rates can be expected as wind turbines reach the end of their 20–25-year operational lifetime. Waste data analytics were applied in this study for the UK in 5-year intervals spanning 2000 to 2039. Current practices for end of life waste management procedures were analysed to create baseline scenarios. These scenarios were used to explore potential waste management mitigation options for various materials and components, such as reuse, remanufacture, recycling, and heat recovery from incineration. Six scenarios were then developed based on these waste management options, which demonstrated the significant environmental benefits of such practices through quantification of waste reduction and greenhouse gas (GHG) emission savings. For the 2015–2019 time period, over 35 kilotonnes of waste are expected to be generated annually. Overall waste is expected to increase over time to more than 1200 kilotonnes annually by 2039. Concrete is expected to account for the majority of waste associated with wind turbine decommissioning initially, because foundations for onshore turbines account for approximately 80% of their total weight. By 2035–2039, steel waste is expected to account for almost 50% of overall waste due to the emergence of offshore turbines, the foundations of which are predominantly made of steel.
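The concrete figure follows from simple mass accounting. A minimal sketch of that arithmetic, assuming a hypothetical per-turbine total mass and the ~80% concrete-foundation share quoted in the abstract (the study's actual fleet data and scenario model are far more detailed):

```python
def onshore_decommissioning_waste(n_turbines, turbine_mass_t, concrete_frac=0.80):
    """Rough split of onshore-turbine decommissioning waste in tonnes,
    assuming concrete foundations make up `concrete_frac` of total mass.

    turbine_mass_t is a hypothetical per-unit total mass including foundation.
    """
    total_t = n_turbines * turbine_mass_t
    return {
        "concrete_t": total_t * concrete_frac,
        "other_t": total_t * (1.0 - concrete_frac),
    }
```

With 80% of mass in the foundation, concrete dominates the waste stream until heavier steel-founded offshore turbines enter the decommissioning mix.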

