A Low-Cost and Semi-Autonomous Robotic Scanning System for Characterising Radiological Waste

Robotics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 119
Author(s):  
Stephen David Monk ◽  
Craig West ◽  
Manuel Bandala ◽  
Nile Dixon ◽  
Allahyar Montazeri ◽  
...  

A novel, semi-autonomous radiological scanning system for inspecting irregularly shaped and radiologically uncharacterised objects in various orientations is presented. The system utilises relatively low-cost, commercial-off-the-shelf (COTS) electronic components and is intended for use within relatively low to medium radioactive dose environments. To illustrate the generic concepts, the combination of a low-cost COTS vision system, a six-DoF manipulator and a gamma radiation spectrometer is investigated. Three modes of vision have been developed, allowing a remote operator to choose the most appropriate algorithm for the task. The system determines scan positions across the selected object, the robot arm then scans it autonomously, and radiological spectra are generated with the gamma spectrometer. These data inform the operator of any radioisotopes likely to be present and where in the object they are located, and thus whether the object should be treated as low-level (LLW), intermediate-level (ILW) or high-level waste (HLW).
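A minimal sketch of the scan-and-classify loop the abstract describes, in Python: a gamma spectrum is acquired at each scan position and its peaks are matched against a small isotope line library. The isotope line energies (Cs-137 at 661.7 keV, Co-60 at 1173.2/1332.5 keV) are standard values; the peak-finding routine, the synthetic spectrum and the waste-category thresholds are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

ISOTOPE_LINES_KEV = {"Cs-137": [661.7], "Co-60": [1173.2, 1332.5]}

def identify_isotopes(energies_kev, counts, tolerance_kev=5.0, min_counts=100):
    """Match the strongest spectral peaks against a small line library."""
    found = []
    peak_idx = np.argsort(counts)[-5:]          # five highest-count channels
    for idx in peak_idx:
        if counts[idx] < min_counts:
            continue
        for isotope, lines in ISOTOPE_LINES_KEV.items():
            if any(abs(energies_kev[idx] - line) < tolerance_kev for line in lines):
                found.append(isotope)
    return sorted(set(found))

def classify_waste(total_counts):
    """Illustrative thresholds; real LLW/ILW/HLW limits are activity-based."""
    if total_counts < 1e3:
        return "LLW"
    if total_counts < 1e5:
        return "ILW"
    return "HLW"

# Fake a spectrum with a Cs-137 peak to exercise the pipeline.
energies = np.linspace(0, 2000, 1024)            # keV per channel
counts = np.random.poisson(5, size=1024).astype(float)
counts[np.argmin(abs(energies - 661.7))] += 5000
print(identify_isotopes(energies, counts), classify_waste(counts.sum()))
```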

2015 ◽  
Vol 27 (2) ◽  
pp. 182-190
Author(s):  
Gou Koutaki ◽  
Keiichi Uchimura

<div class=""abs_img""> <img src=""[disp_template_path]/JRM/abst-image/00270002/08.jpg"" width=""150"" />Developed shogi robot system</div> The authors developed a low-cost, safety shogi robot system. A Web camera installed on the lower frame is used to recognize pieces and their positions on the board, after which the game program is played. A robot arm moves a selected piece to the position used in playing a human player. A fast, robust image processing algorithm is needed because a low-cost wide-angle Web camera and robot are used. The authors describe image processing and robot systems, then discuss experiments conducted to verify the feasibility of the proposal, showing that even a low-cost system can be highly reliable. </span>


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2574 ◽  
Author(s):  
Jesus Monroy-Anieva ◽  
Cyril Rouviere ◽  
Eduardo Campos-Mercado ◽  
Tomas Salgado-Jimenez ◽  
Luis Garcia-Valdovinos

This work describes the modeling, control and development of a low-cost Micro Autonomous Underwater Vehicle (μ-AUV), named AR2D2. The main objective of this work is to enable the vehicle to detect and follow an object of a defined color by means of readings from a depth sensor and information provided by an artificial vision system. A nonlinear PD (Proportional-Derivative) controller is implemented on the vehicle in order to stabilize the heave and surge motions. A formal stability proof of the closed-loop system using Lyapunov's theory is given. Furthermore, the performance of the μ-AUV is validated through numerical simulations in MATLAB and real-time experiments.
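A minimal sketch of a saturated PD depth (heave) controller of the general form described above, exercised on a toy one-degree-of-freedom plant; the gains, saturation limit and plant parameters are illustrative assumptions, not the authors' exact control law or vehicle model.

```python
import numpy as np

def pd_heave_control(depth, depth_rate, depth_ref, kp=8.0, kd=4.0, u_max=5.0):
    """Return a thrust command (N) bounded by the thruster saturation u_max."""
    error = depth_ref - depth
    u = kp * error - kd * depth_rate
    return float(np.clip(u, -u_max, u_max))

# Toy simulation of a 1-DOF heave plant: m*z'' = u - d*z'
m, d, dt = 3.0, 1.5, 0.02
z, z_dot = 0.0, 0.0
for _ in range(1000):
    u = pd_heave_control(z, z_dot, depth_ref=1.0)
    z_dot += (u - d * z_dot) / m * dt
    z += z_dot * dt
print(f"depth after 20 s: {z:.3f} m")   # settles near the 1.0 m reference
```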


2018 ◽  
Author(s):  
Fernando Alvarez-Lopez ◽  
Marcelo Fabián Maina ◽  
Francesc Saigí-Rubió

BACKGROUND The increasingly pervasive presence of technology in the operating room (OR) raises the need to study the interaction between the surgeon and the computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices, which enable non-contact, gesture-based human-computer interaction (HCI), is currently being explored as a solution in surgical environments. OBJECTIVE The aim of this systematic review was to provide an account of the state of the art of COTS devices for the detection of manual gestures in surgery and to identify their use as simulation tools for teaching motor skills in minimally invasive surgery (MIS). METHODS A systematic literature review was conducted in PubMed, Embase, ScienceDirect and IEEE for articles published between January 2000 and 2016 on the use of COTS devices for gesture detection in surgical environments and in simulation for surgical skills learning in MIS. RESULTS A total of 2709 studies were identified, 76 of which met the selection criteria. The Microsoft Kinect™ and the Leap Motion Controller™ were the most widely used COTS devices. The most common intervention was image manipulation in surgical and interventional radiology environments, followed by interaction with virtual reality environments for educational or interventional purposes; the possibility of using this technology to develop portable, low-cost simulators for skills learning in MIS was also examined. Given that the vast majority of the articles found were proof-of-concept studies or prototype user and feasibility tests, this field is still in the exploration phase in areas requiring touchless manipulation in settings that must adhere to asepsis and antisepsis protocols, such as angiography suites and operating rooms. CONCLUSIONS COTS devices applied to hand and instrument gesture-based interactions in the field of simulation for skills learning and training in MIS could open up a promising avenue towards ubiquitous training and pre-surgical warm-up.


2009 ◽  
Vol 147-149 ◽  
pp. 243-250
Author(s):  
Shiuh Jer Huang ◽  
Chie Yi Wu

A stereo visual servo robotic system is developed on a Nios II SOPC development board with an ALTERA FPGA chip to manipulate a retrofitted Mitsubishi RV-M2 robotic system. The 3-D position of the target relative to the stereo vision system is first extracted with a low-cost CMOS stereo vision algorithm. Then, the relative motion between the robotic end-effector and the target is planned to guide the robot arm to catch the object. A fuzzy sliding mode control algorithm is employed to regulate the trajectory motion of each joint. The experimental results show that this visual servo robotic system can track and catch a moving target in 3D space and execute interaction functions with a player.
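The 3-D target position in such a stereo setup follows from pinhole triangulation: depth Z = f·b/d for disparity d, focal length f and baseline b. A minimal sketch with made-up calibration values (not the paper's):

```python
import numpy as np

def triangulate(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in the left-camera frame from a matched pixel pair."""
    disparity = float(u_left - u_right)          # pixels
    Z = f_px * baseline_m / disparity            # depth, metres
    X = (u_left - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])

print(triangulate(u_left=380, u_right=340, v=260, f_px=700.0,
                  baseline_m=0.12, cx=320, cy=240))   # roughly 2.1 m away
```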


2015 ◽  
Vol 68 (4) ◽  
pp. 646-664 ◽  
Author(s):  
Xiaohong Zhang ◽  
Mingkui Wu ◽  
Wanke Liu

A prerequisite for Global Positioning System (GPS) attitude determination is the simultaneous calculation of the baselines between antennae with millimetre-level accuracy. However, to obtain a low-cost attitude determination system, a set of Commercial-Off-The-Shelf (COTS) receivers with separate clocks is used. In this case, if the receiver clocks are not precisely synchronized, the baseline vector between antennae will be calculated from GPS signals received at different times. This can be a significant error source for highly kinematic applications. In this paper, two equivalent and effective approaches are developed to compensate for this bias in baseline estimation and attitude determination. Test results using real airborne GPS data demonstrate that the time misalignment between the two receivers can result in a 5 cm baseline offset for an aircraft flying at 50 m/s; the corresponding attitude errors can reach about 0.50° in yaw and 0.10° in pitch for an attitude determination system with a baseline length of 3.79 m. With the proposed methods, these errors can be effectively eliminated.
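A back-of-the-envelope check of the effect described above: to first order, the baseline error caused by receiver time misalignment is the platform velocity times the time offset, Δb ≈ v·Δt. The 1 ms offset below is an illustrative value consistent with the reported 5 cm error at 50 m/s, not a figure taken from the paper; how the resulting angular error splits between yaw and pitch depends on the baseline geometry.

```python
import math

v = 50.0            # aircraft velocity, m/s (from the abstract)
dt = 1.0e-3         # receiver clock misalignment, s (assumed)
baseline = 3.79     # antenna separation, m (from the abstract)

baseline_error = v * dt                                    # first-order error, ~0.05 m
worst_case_angle = math.degrees(math.atan2(baseline_error, baseline))
print(f"baseline error ≈ {baseline_error*100:.1f} cm, "
      f"worst-case attitude error ≈ {worst_case_angle:.2f} deg")
```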


Author(s):  
W. Wu ◽  
C. Chen ◽  
J. Li ◽  
Y. Cong ◽  
B. Yang

Abstract. Accurate registration of the sparse sequential point cloud frames acquired by a 3D light detection and ranging (LiDAR) sensor such as the VLP-16 is a prerequisite for the back-end optimization of general LiDAR SLAM algorithms to achieve a globally consistent map. This process is also called LiDAR odometry. Aiming to achieve lower drift and more robust LiDAR odometry in less-structured outdoor scenes using a low-cost, wheeled robot-borne laser scanning system, a segment-based sampling strategy for LiDAR odometry is proposed in this paper. The proposed method was tested in two typical, less-structured outdoor scenes and compared with two other state-of-the-art methods. The results reveal that the proposed method achieves lower drift and significantly outperforms the state of the art.
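A minimal sketch of the general idea behind segment-based sampling for scan registration: split each scan at large range discontinuities and draw an equal point budget from every segment, so that sparse but informative structures are not swamped by dense returns. The gap threshold and per-segment budget are assumptions; the paper's own segmentation criterion may differ.

```python
import numpy as np

def segment_scan(points, gap_threshold=0.5):
    """Split an ordered scan line into segments at range discontinuities."""
    ranges = np.linalg.norm(points, axis=1)
    breaks = np.where(np.abs(np.diff(ranges)) > gap_threshold)[0] + 1
    return np.split(points, breaks)

def sample_per_segment(points, budget_per_segment=20, rng=np.random.default_rng(0)):
    """Draw an equal number of points from each segment of the scan."""
    sampled = []
    for seg in segment_scan(points):
        k = min(budget_per_segment, len(seg))
        sampled.append(seg[rng.choice(len(seg), size=k, replace=False)])
    return np.vstack(sampled)

# Toy scan: a dense nearby wall followed by a sparse far-away structure.
near = np.c_[np.linspace(1, 2, 500), np.ones(500), np.zeros(500)]
far = np.c_[np.linspace(8, 9, 30), np.ones(30), np.zeros(30)]
scan = np.vstack([near, far])
print(sample_per_segment(scan).shape)   # both segments contribute equally
```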


2019 ◽  
Vol 11 (02) ◽  
pp. 1950019 ◽  
Author(s):  
Lin Gan ◽  
He Zhang ◽  
Cheng Zhou ◽  
Lin Liu

The rotating scanning motor is a key component of a synchronous scanning laser fuze. The high launch-overload environment of conventional ammunition seriously affects the reliability of the motor. Based on the theory that a buffer pad can attenuate the impact stress wave, a new motor buffering isolation method is proposed. A dynamic model of the new buffering isolation structure is established in the ANSYS finite element analysis software to perform nonlinear impact dynamics simulations of the rotating scanning motor. The effectiveness of the buffering isolation with different materials is comparatively analyzed. Finally, a Macht hammer impact experiment is carried out; the results show that under a 70,000 g impact acceleration, the new buffering isolation method reduces the impact load by a factor of about 15, which effectively alleviates plastic deformation of the rotating scanning motor and improves the reliability of the synchronous scanning system. A new method and theoretical basis for anti-high-overload research on laser fuzes are presented.
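A qualitative one-degree-of-freedom illustration of the isolation principle: a motor mass riding on a buffer pad modelled as a spring-damper sees a much lower peak acceleration than the half-sine base shock applied to it. All parameter values are illustrative assumptions and far milder than the 70,000 g test; the paper's own analysis is a nonlinear finite element stress-wave model, not this rigid-body analogy.

```python
import numpy as np

m, k, c = 0.1, 1.0e6, 126.0            # kg, N/m, N*s/m (assumed pad model)
peak_acc = 10000.0 * 9.81              # base shock peak, m/s^2 (assumed, not 70,000 g)
tau, dt = 1.0e-4, 1.0e-6               # half-sine pulse width and time step, s

x = v = peak_transmitted = 0.0         # x, v: motor motion relative to the base
for t in np.arange(0.0, 5.0e-3, dt):   # explicit Euler over 5 ms
    a_base = peak_acc * np.sin(np.pi * t / tau) if t < tau else 0.0
    a_rel = -a_base - (k * x + c * v) / m
    v += a_rel * dt
    x += v * dt
    # absolute acceleration of the motor comes only from the pad forces
    peak_transmitted = max(peak_transmitted, abs((k * x + c * v) / m))

print(f"base peak: {peak_acc / 9.81:.0f} g, "
      f"motor peak: {peak_transmitted / 9.81:.0f} g")   # well below the base peak
```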


Minerals ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 791
Author(s):  
Sufei Zhang ◽  
Ying Guo

This paper introduces computer vision systems (CVSs), which provide a new method to measure gem colour, and compares CVS and colourimeter (CM) measurements of jadeite-jade colour in the CIELAB space. The feasibility of using a CVS for jadeite-jade colour measurement was verified by an expert group test and a reasonable regression model in an experiment involving 111 samples covering almost all jadeite-jade colours. In the expert group test, more than 93.33% of CVS images were considered to have high similarity with the real objects. Comparison of L*, a*, b*, C*, h and ∆E* (greater than 10) from the CVS and CM tests indicates that significant visual differences exist between the measured colours. For a*, b* and h, the R² of the regression model between CVS and CM was 90.2% or more. CVS readings can be used to predict the colour values measured by CM, which means that CVS technology can become a practical tool for detecting the colour of jadeite-jade.
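The CIELAB quantities compared in the study are standard: chroma C* = √(a*² + b*²), hue angle h = atan2(b*, a*) and the CIE76 colour difference ΔE*ab. A minimal sketch with made-up readings (not the paper's data):

```python
import math

def chroma_hue(a, b):
    """Return CIELAB chroma C* and hue angle h in degrees."""
    return math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

cvs_reading = (62.0, -28.0, 17.0)   # L*, a*, b* from the vision system (example)
cm_reading = (58.5, -21.0, 12.0)    # colourimeter reading of the same stone (example)
print(chroma_hue(*cvs_reading[1:]), f"dE* = {delta_e76(cvs_reading, cm_reading):.1f}")
```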


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Supakorn Harnsoongnoen ◽  
Nuananong Jaroensuk

Water displacement and flotation are two of the most accurate and rapid methods for grading and assessing the freshness of agricultural products based on density determination. However, these techniques are not suitable for inspecting products such as eggs, which absorb water: immersion can be considered intrusive or destructive and can affect the measurement results. Here we present a novel method for non-destructive, non-invasive, low-cost, simple and real-time grading and freshness assessment of eggs based on density detection using machine vision and a weighing sensor. This is the first proposal that divides egg freshness into intervals through density measurements. The machine vision system measures the external physical characteristics (length and breadth) of each egg to evaluate its volume, and the weighing system measures its weight. Egg weight and volume were used to calculate density for grading and freshness assessment. The proposed system measured weight, volume and density with accuracies of 99.88%, 98.26% and 99.02%, respectively. The results showed that the weight and freshness of eggs stored at room temperature decreased with storage time. The relationship between density and percentage freshness was linear for all egg sizes, with coefficients of determination (R²) of 0.9982, 0.9999, 0.9996, 0.9996 and 0.9994 for egg size classes 0, 1, 2, 3 and 4, respectively. This study shows that egg freshness can be determined from density without using water for displacement or flotation tests, which gives the system future potential as an important measurement tool for the poultry industry.
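A minimal sketch of the grading chain the abstract describes: estimate egg volume from the machine-vision length and breadth, divide the weighed mass by that volume, and map density to a freshness percentage with a linear fit. The prolate-spheroid volume approximation and the fit coefficients are illustrative assumptions; the abstract does not give the authors' exact model.

```python
import math

def egg_volume_cm3(length_cm, breadth_cm):
    """Approximate the egg as a prolate spheroid: V = (pi/6) * L * B^2 (assumption)."""
    return math.pi / 6.0 * length_cm * breadth_cm ** 2

def density_g_per_cm3(weight_g, length_cm, breadth_cm):
    return weight_g / egg_volume_cm3(length_cm, breadth_cm)

def freshness_percent(density, slope=250.0, intercept=-175.0):
    """Hypothetical linear density-to-freshness mapping (coefficients assumed)."""
    return max(0.0, min(100.0, slope * density + intercept))

rho = density_g_per_cm3(weight_g=60.0, length_cm=5.7, breadth_cm=4.4)
print(f"density ≈ {rho:.3f} g/cm³, freshness ≈ {freshness_percent(rho):.0f}%")
```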


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
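A minimal sketch of an eight-class image classifier of the kind trained for the MCC pipeline, in PyTorch; the layer sizes, input resolution and plain-CNN architecture are illustrative assumptions, not the authors' customized network.

```python
import torch
import torch.nn as nn

class SmallMothCNN(nn.Module):
    """A tiny convolutional classifier with eight output classes."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SmallMothCNN()
logits = model(torch.randn(4, 3, 128, 128))   # a batch of four moth crops
print(logits.shape)                           # torch.Size([4, 8])
```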

