computer vision technology
Recently Published Documents

TOTAL DOCUMENTS: 241 (five years: 146)
H-INDEX: 14 (five years: 3)

2022 ◽  
Vol 355 ◽  
pp. 03016
Author(s):  
Rongyong Zhao ◽  
Yan Wang ◽  
Chuanfeng Han ◽  
Ping Jia ◽  
Cuiling Li ◽  
...  

In recent years, with the rapid development of computer vision technology, image-based analysis of the human body has become an important research task, covering pedestrian target detection, trajectory tracking, posture estimation and behaviour recognition. The centre of mass is one of the key characteristics reflecting pedestrian movement. This paper first introduces the biped robot model from robotics and, starting from forward and inverse kinematics, derives the mapping between the position of each joint and the pose of the end effector. The skeleton model of human joint points is then used to determine the characteristics of bone posture and joint angle. A moment-of-inertia factor is introduced, and the superposed motion of different joint points is considered, in order to establish a pedestrian motion centroid model. By calculating the equivalent dynamic centroid, the laws of pedestrian kinematics can be explored and the mechanism of pedestrian movement can be understood more deeply.
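
A minimal sketch of one way an equivalent centroid could be computed from 2D skeleton key points, assuming per-segment mass fractions as weights; the segment names and mass-fraction values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative body-segment mass fractions (assumed values, not from the paper).
SEGMENT_MASS_FRACTION = {
    "head": 0.08, "torso": 0.50, "upper_arm": 0.03, "forearm": 0.02,
    "thigh": 0.10, "shank": 0.05, "foot": 0.02,
}

def segment_centroid(p_proximal, p_distal):
    """Approximate a segment's centroid as the midpoint of its two joints."""
    return (np.asarray(p_proximal, dtype=float) + np.asarray(p_distal, dtype=float)) / 2.0

def body_centroid(segments):
    """Weighted average of segment centroids.

    segments: list of (segment_name, proximal_joint_xy, distal_joint_xy).
    """
    weights, centroids = [], []
    for name, p, d in segments:
        weights.append(SEGMENT_MASS_FRACTION[name])
        centroids.append(segment_centroid(p, d))
    weights = np.asarray(weights)
    centroids = np.asarray(centroids)
    return (weights[:, None] * centroids).sum(axis=0) / weights.sum()

# Example: a few segments from a detected 2D skeleton (pixel coordinates).
com = body_centroid([
    ("torso", (320, 180), (320, 300)),
    ("thigh", (320, 300), (315, 380)),
    ("shank", (315, 380), (312, 450)),
])
print(com)
```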


2021 ◽  
Vol 14 (4) ◽  
pp. 1-27
Author(s):  
Giorgio Presti ◽  
Dragan Ahmetovic ◽  
Mattia Ducci ◽  
Cristian Bernareggi ◽  
Luca A. Ludovico ◽  
...  

Obstacle avoidance is a major challenge during independent mobility for blind or visually impaired (BVI) people. Typically, BVI people can only perceive obstacles at a short distance (about 1 m when using the white cane), and some obstacles are hard to detect (e.g., those elevated from the ground) or should not be hit by the white cane (e.g., a standing person). A solution to these problems can be found in recent computer-vision techniques that can run on mobile and wearable devices to detect obstacles at a distance. However, in addition to detecting obstacles, it is also necessary to convey information about them in real time. This contribution presents WatchOut, a sonification technique for conveying real-time information about the main properties of an obstacle to a BVI person, who can then use this additional feedback to safely navigate in the environment. WatchOut was designed with a user-centered approach, involving four iterations of online listening tests with BVI participants to define, improve and evaluate the sonification technique, eventually obtaining almost perfect recognition accuracy. WatchOut was also implemented and tested as a module of a mobile app that detects obstacles using state-of-the-art computer vision technology. Results show that the system is considered usable and can guide users to avoid more than 85% of the obstacles.
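
As a hedged illustration of sonification of obstacle properties, the sketch below maps distance to loudness, horizontal bearing to stereo pan, and elevation to pitch; this mapping and the parameter ranges are assumptions for illustration only, not the actual WatchOut design:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float      # estimated distance to the obstacle
    bearing_deg: float     # horizontal angle, negative = left, positive = right
    elevated: bool         # True if the obstacle is above ground level

def sonification_params(obs: Obstacle) -> dict:
    """Map obstacle properties to simple audio parameters (illustrative mapping)."""
    # Closer obstacles -> louder (distance clamped to [0.5, 5] m).
    d = min(max(obs.distance_m, 0.5), 5.0)
    gain = 1.0 - (d - 0.5) / 4.5
    # Bearing -> stereo pan in [-1, 1].
    pan = max(-1.0, min(1.0, obs.bearing_deg / 45.0))
    # Elevated obstacles -> higher pitch.
    pitch_hz = 880.0 if obs.elevated else 440.0
    return {"gain": gain, "pan": pan, "pitch_hz": pitch_hz}

print(sonification_params(Obstacle(distance_m=1.2, bearing_deg=-20.0, elevated=True)))
```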


Pathobiology ◽  
2021 ◽  
pp. 1-9
Author(s):  
Emad A. Rakha ◽  
Konstantinos Vougas ◽  
Puay Hoon Tan

Digital technology has been used in the field of diagnostic breast pathology and immunohistochemistry (IHC) for decades. Examples include automated tissue processing and staining; digital data processing, storage and management; voice recognition systems; and digital technology-based production of antibodies and other IHC reagents. However, the recent application of whole slide imaging technology and artificial intelligence (AI)-based tools has attracted a lot of attention. The use of AI tools in breast pathology is discussed only briefly here, as it is covered in other reviews. Instead, we present the main applications of digital technology in IHC: automation of IHC staining, the use of image analysis systems and computer vision technology to interpret IHC staining, and the use of AI-based tools to predict marker expression from haematoxylin and eosin-stained digitized images.


2021 ◽  
Vol 17 (4) ◽  
Author(s):  
Gunawan Dewantoro ◽  
Jamil Mansuri ◽  
Fransiscus Dalu Setiaji

The line follower robot is a mobile robot that navigates from one place to another by following a trajectory, generally in the form of black or white lines. Such robots can also assist humans in transportation and industrial automation. However, conventional sensor-based line followers face several challenges, including calibration, poor performance on wavy surfaces, and light sensor placement when the line width varies. Robot vision uses image processing and computer vision technology to recognize objects and control the robot's motion. This study discusses the implementation of a vision-based line follower robot using a camera as the only sensor to capture objects. The robot's performance with two different CPU controllers, the Raspberry Pi and the Jetson Nano, is compared. The image processing uses an edge detection method that detects the border between two image areas and marks the different parts, enabling the robot to control its motion based on the object captured by the webcam. The results show that the accuracies of the robot employing the Raspberry Pi and the Jetson Nano are 96% and 98%, respectively.
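
A minimal sketch of vision-based line following using edge detection, assuming a Canny edge detector on the lower part of the frame and a simple offset-from-centre steering signal; the paper does not specify these exact steps, so the thresholds and region of interest below are illustrative:

```python
import cv2
import numpy as np

def line_offset(frame_bgr):
    """Return the horizontal offset of the detected line from the image centre.

    Returns a value in [-1, 1] (negative = line is to the left), or None
    if no line edges are found in the lower part of the frame.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # edge detection
    h, w = edges.shape
    roi = edges[int(0.7 * h):, :]                # look only near the robot
    cols = np.where(roi.any(axis=0))[0]          # columns containing edges
    if cols.size == 0:
        return None
    line_centre = (cols.min() + cols.max()) / 2.0
    return (line_centre - w / 2.0) / (w / 2.0)

# Example usage with a webcam (assumed device index 0):
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     print(line_offset(frame))
```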


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Naichun Gao

Embedded networking has broad prospects. With the Internet and the rapid development of computing, computer vision technology has found wide application in many fields, and identifying incorrect movements in sports training is one important example. To study how computer vision technology can identify athletes' incorrect movements in sports training, this paper constructs a hidden Markov model based on computer vision technology to collect video and recognize the landing and take-off movements and badminton serving movements of a team of athletes under training conditions. A Bayesian classification algorithm is used to analyse the acquired training data, obtaining the error frequency and the number of errors for the landing jump action, as well as three characteristic quantities (the displacement, velocity, and acceleration of the body's centre of gravity) for successful and incorrect badminton serves. The accuracy of the action recognition method used in this article is then compared, over 30 experiments, with an action recognition method based on deep learning and one based on EMG signals. The training process of the deep learning baseline is split into two stages: first, single-layer neurons are built layer by layer, so that the network is trained one layer at a time; once all layers are trained, fine-tuning is performed using a wake-sleep procedure. The final results show that the athletes' incorrect landing-jump actions are concentrated in knee valgus, with a total error frequency of 58% and a personal error frequency of 45%; errors in the landing distance between the two feet also appear frequently, with a total frequency of 50% and a personal frequency of 30%. Athletes should therefore pay more attention to knee valgus and the distance between the feet when performing landing jumps. Differences in the displacement, speed, and acceleration of the body's centre of gravity during the badminton serve affect serving errors. The action recognition method used in this study has certain advantages over the other two methods and achieves higher recognition accuracy.
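
The abstract gives no implementation details of the HMM-based recognizer; as a hedged illustration, a common way to use hidden Markov models for action recognition is to fit one HMM per action on pose-feature sequences and classify a new sequence by maximum log-likelihood. The `hmmlearn` package, the feature dimensions, and the random stand-in data below are assumptions:

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency: pip install hmmlearn

def train_action_hmm(sequences, n_states=4):
    """Fit one Gaussian HMM on a list of feature sequences for a single action.

    sequences: list of arrays of shape (T_i, n_features), e.g. joint-point
    displacement/velocity features extracted from video.
    """
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

def classify(sequence, models):
    """Pick the action whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))

# Illustrative usage with random features standing in for real pose features.
rng = np.random.default_rng(0)
models = {
    "landing_jump": train_action_hmm([rng.normal(0, 1, (30, 6)) for _ in range(5)]),
    "badminton_serve": train_action_hmm([rng.normal(1, 1, (30, 6)) for _ in range(5)]),
}
print(classify(rng.normal(1, 1, (25, 6)), models))
```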


2021 ◽  
Vol 11 (24) ◽  
pp. 11917
Author(s):  
Wei Liu ◽  
Yun Ma ◽  
Mingqiang Gao ◽  
Shuaidong Duan ◽  
Longsheng Wei

In a connected vehicle environment based on vehicle-to-vehicle (V2V) technology, images from the front vehicle and the ego vehicle are fused to augment a driver's or autonomous system's visual field. This helps to avoid road accidents by eliminating blind spots (objects occluded by vehicles), especially tailgating in urban areas. Multi-view image fusion is a difficult problem when the relative location of the two sensors is unknown and the object to be fused is occluded in some views. Therefore, we propose an image geometric projection model and a new cooperative fusion method between neighbouring vehicles. Based on a 3D inter-vehicle projection model, selected feature matching points are used to estimate the geometric transformation parameters. By adding depth information, our method also introduces a new deep-affine transformation to fuse inter-vehicle images. Experimental results on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset validate our algorithm; compared with previous work, our method improves the IoU index by 2 to 3 times. This algorithm can effectively enhance the visual perception ability of intelligent vehicles and will help to promote the further development of computer vision technology in the field of cooperative perception.
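
The paper's deep-affine transformation is not specified in the abstract; the sketch below shows only the standard feature-matching step (ORB features plus RANSAC affine estimation) on which such transformation-parameter estimation typically builds, with the detector settings and match count chosen as illustrative assumptions:

```python
import cv2
import numpy as np

def estimate_inter_vehicle_transform(img_front, img_ego):
    """Estimate a 2D affine transform between two views from feature matches.

    Returns the 2x3 affine matrix, or None if there are too few matches.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_front, None)
    kp2, des2 = orb.detectAndCompute(img_ego, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects matches on occluded or moving objects.
    M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return M

# Usage: warp the front-vehicle image into the ego-vehicle view.
# M = estimate_inter_vehicle_transform(front_gray, ego_gray)
# fused = cv2.warpAffine(front_gray, M, (ego_gray.shape[1], ego_gray.shape[0]))
```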


2021 ◽  
Author(s):  
Xiao-Chun Zhao ◽  
Cheng Li ◽  
Jun-Xing Zheng ◽  
Xin-Min Gao ◽  
Ya-Gang Zhang ◽  
...  

Author(s):  
V. G. Zhukhovitsky ◽  
S. O. Navolnev ◽  
N. V. Shevlyagina

Using an original computer program, a quantitative characterization of the structural features of cultures of two reference strains of Helicobacter pylori, identified by transmission electron microscopy, was performed. The results made it possible to establish morphological, ultrastructural and brightness differences between individual bacterial cells of the studied strains. The proposed program, written in accordance with the requirements of computer vision technology, makes it possible to detect differences in the structure of bacterial cells that are not detected by visual assessment, and opens up the possibility of studying the phenotypic heterogeneity of isogenic populations of Helicobacter pylori and its pathogenic significance.
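
The original program is not described in the abstract; as a hedged sketch of the general approach, one could segment individual cells in a TEM image and measure per-cell area and brightness. The Otsu thresholding, morphological cleanup, and size cutoff below are assumptions for illustration:

```python
import cv2
import numpy as np

def per_cell_stats(tem_gray):
    """Segment dark bacterial cells in an 8-bit grayscale TEM image and measure each one.

    Returns a list of (area_px, mean_brightness) per detected cell.
    """
    # Otsu threshold; cells are assumed darker than the background.
    _, mask = cv2.threshold(tem_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    results = []
    for i in range(1, n_labels):          # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < 100:                    # ignore small noise (assumed cutoff)
            continue
        mean_brightness = float(tem_gray[labels == i].mean())
        results.append((int(area), mean_brightness))
    return results
```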


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Jianxun Deng

With the continuous advancement of smart agriculture, introducing robots for intelligent harvesting is one of the crucial methods for picking fruits, vegetables, and melons in modern agriculture. In this paper, citrus images are first captured under three different illumination conditions (front lighting, normal lighting, and back lighting) using computer vision technology. Secondly, image data of the fruits, fruit stems, and leaves of the citrus are collected. The colour component distributions of citrus under different colour models are analysed according to the corresponding characteristic values, and an exploratory data analysis process for the citrus image data is established. In addition, 300 citrus images are selected, and the citrus fruits are segmented from the background in simulation experiments. The results indicate that the recognition rate for citrus maturity exceeds 98%, which proves the effectiveness of the method proposed in this paper.
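
The paper analyses colour-component distributions across colour models; as a minimal sketch of colour-model-based fruit segmentation, the example below thresholds an assumed HSV range for orange fruit. The specific bounds are illustrative and would normally be tuned from the measured colour distributions:

```python
import cv2
import numpy as np

def segment_citrus(image_bgr):
    """Segment orange citrus fruit from the background using an HSV colour range.

    The hue/saturation/value bounds below are illustrative assumptions.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([10, 100, 80])    # lower bound for orange hues
    upper = np.array([30, 255, 255])   # upper bound for orange hues
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up small holes and specks before extracting the fruit region.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask), mask
```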


2021 ◽  
Vol 11 (23) ◽  
pp. 11321
Author(s):  
Dejiang Wang ◽  
Jianji Cheng ◽  
Honghao Cai

Based on the features of cracks, this research proposes the concept of crack key points as a method for crack characterization and establishes an image crack detection model based on reference anchor points, named KP-CraNet. Built on ResNet, the last three feature layers are repurposed for the specific task of crack key point feature extraction, forming a feature filtration network. The accuracy of the model's recognition is controllable and can meet both pixel-level requirements and the efficiency needs of engineering applications. To verify the rationality and applicability of the proposed model, we introduce a distance distribution map. The results for classical evaluation metrics such as accuracy, recall rate, F1 score, and the distance distribution map show that the method established in this research improves crack detection quality and has strong generalization ability. Our model provides a new method of crack detection based on computer vision technology.
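
The KP-CraNet architecture is not detailed in the abstract; the sketch below only illustrates the general idea of repurposing the last three ResNet feature stages to predict key point heatmaps. The backbone choice (ResNet-18), head design, and output resolution are assumptions, not the paper's method:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CrackKeypointHead(nn.Module):
    """Hedged sketch: reuse the last three ResNet stages as feature extractors
    and predict a crack key point heatmap from each stage, then average them.
    This is an illustrative example, not the paper's KP-CraNet architecture."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        self.stem = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1,
        )
        self.layer2, self.layer3, self.layer4 = (
            backbone.layer2, backbone.layer3, backbone.layer4,
        )
        # One 1x1 conv per stage turns its feature map into a heatmap.
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, 1, kernel_size=1) for c in (128, 256, 512)]
        )

    def forward(self, x):
        f1 = self.stem(x)
        f2 = self.layer2(f1)
        f3 = self.layer3(f2)
        f4 = self.layer4(f3)
        size = f2.shape[-2:]
        heatmaps = [
            nn.functional.interpolate(head(f), size=size, mode="bilinear",
                                      align_corners=False)
            for head, f in zip(self.heads, (f2, f3, f4))
        ]
        return torch.sigmoid(torch.stack(heatmaps, dim=0).mean(dim=0))

# model = CrackKeypointHead()
# out = model(torch.randn(1, 3, 224, 224))   # -> (1, 1, 28, 28) heatmap
```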

