camera angle
Recently Published Documents


TOTAL DOCUMENTS

88
(FIVE YEARS 36)

H-INDEX

12
(FIVE YEARS 2)

2021 ◽  
pp. 5008-5023
Author(s):  
Rasool D. Haameid ◽  
Bushra Q. Al-Abudi ◽  
Raaid N. Hassan

This work explores the design of an automated unmanned aerial vehicle (UAV) system for object detection, labelling, and localization using deep learning. The system takes pictures with a low-cost camera and uses a GPS unit to specify positions. The data are sent to the base station via a Wi-Fi connection. The proposed system consists of four main parts. First, the drone itself, which was assembled and fitted with a Raspberry Pi 4, and whose flight path was controlled. Second, the various programs installed to define the parts of the drone and prepare it for flight; this part included software for both the Raspberry Pi 4 and the servo, along with protocols for communication, video transmission, and sending and receiving signals between the drone and the computer. Third, a real-time, modified, one-dimensional convolutional neural network (1D-CNN) algorithm, applied to detect the discovered objects and determine their type (labelling). Fourth, GPS devices, used to determine the locations of the drone's starting and ending points. Trigonometric functions were then applied to the camera angle and the drone altitude to calculate the direction of the detected object automatically. According to the performance evaluation conducted, the implemented system is capable of meeting the targeted requirements.
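The abstract does not give the exact trigonometric formulation, but the idea of projecting a detection onto the ground from the drone's altitude and camera tilt can be sketched as follows. The function name, the flat-terrain assumption, and the simple metres-per-degree conversion are all illustrative, not the authors' method:

```python
import math

def estimate_object_position(lat, lon, altitude_m, heading_deg, camera_tilt_deg):
    """Estimate the ground position of an object centred in the camera frame.

    Assumes flat terrain. The horizontal distance from the point directly
    below the drone is d = altitude * tan(tilt), where tilt is measured
    from the vertical (nadir = 0 degrees). That offset is projected along
    the drone's heading and converted to a latitude/longitude displacement.
    """
    tilt = math.radians(camera_tilt_deg)
    heading = math.radians(heading_deg)
    ground_dist = altitude_m * math.tan(tilt)  # metres from the nadir point

    # Approximate metres-per-degree conversion, valid for short distances.
    dlat = (ground_dist * math.cos(heading)) / 111_320.0
    dlon = (ground_dist * math.sin(heading)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

For example, at 100 m altitude with the camera tilted 45° off nadir, the object lies roughly 100 m ahead of the drone along its heading.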


Author(s):  
Abd Gani S. F. ◽  
Miskon M. F ◽  
Hamzah R. A ◽  
Mohamood N ◽  
...  

Automatic Number Plate Recognition (ANPR) combines electronic hardware and complex computer vision software algorithms to recognize the characters on vehicle license plates. Many researchers have proposed and implemented ANPR for applications such as law enforcement and security, access control, border access, tracking stolen vehicles, tracking traffic violations, and parking management systems. This paper discusses a live-video ANPR system using a CNN, developed for Malaysian license plate standards on an Android smartphone with a limited-resolution embedded camera and limited processing power. In an ideal outdoor environment with good lighting and a direct or slightly skewed camera angle, recognition works perfectly with a computational time of 0.635 seconds. However, performance degrades with poor lighting, extremely skewed license plate angles, and fast vehicle movement.


2021 ◽  
Vol 13 (17) ◽  
pp. 3353
Author(s):  
Ignacio Zapico ◽  
Jonathan B. Laronne ◽  
Lázaro Sánchez Castillo ◽  
José F. Martín Duque

Conducting topographic surveys in active mines is challenging due to ongoing operations and hazards, particularly on highwalls subject to constant and active mass movements (rock and earth falls, slides, and flows). These vertical and long surfaces are the core of most mines, as the mineral feeding mining production originates there, and they often lack easy and safe access paths. This framework highlights the importance of producing non-contact, high-accuracy, detailed topographies to detect instabilities before they occur. We conducted drone flights in search of the best settings, in terms of altitude mode and camera angle, to produce digital representations of topography using Structure from Motion (SfM). Identification of discontinuities was evaluated, as they are a reliable indicator of potential failure areas. Natural shapes were used as control/check points and were surveyed with a robotic total station with a coaxial camera. The study was conducted in an active kaolin mine near the Alto Tajo Natural Park of East-Central Spain, where the 140 m highwall is formed by layers of limestone, marls, and sands. We demonstrate that for this vertical landscape, a facade drone flight mode combined with a nadir camera angle, programmed automatically with computer-based mission planning software, provides the most accurate and detailed topographies in the shortest time and with increased flight safety. Contrary to previous reports, adding oblique images does not improve accuracy for this configuration. Moreover, neither extra sets of images nor an expert pilot are required. These topographies allowed the detection of 93.5% more discontinuities than Above Mean Sea Level surveys, the common approach used in mining areas. Our findings improve present SfM-UAV survey workflows on long highwalls. The versatile topographies are useful for the management and stabilization of highwalls during phases of operation, as well as during closure and reclamation.


2021 ◽  
pp. 019685992110408
Author(s):  
David Staton

In an effort to put more eyeballs on television sets, and in an attempt to reinvigorate a sport long beleaguered by doping scandals, recent questions surrounding female sponsorships, and a vanishing audience, the International Association of Athletics Federations unveiled a new camera designed by Seiko during the September 2019 World Championships held in Doha, Qatar. The idea was to add to an immersive experience, offering unparalleled views of sprinters at the moment they exploded from the starting blocks. Like many things during the Doha meet, the effort became the punchline of a bad joke. Rather than getting to the heart of the event, the camera's focus was a bit lower; the Seiko angle became known derisively as the crotch shot. After objections by two female German sprinters, the positioning of the camera angle (specifically, what would be shown when) was reconsidered, reframed, and essentially retired. Control of the body, including how it is observed, and the closely related idea of control of one's image are bound by certain ethical dimensions, particularly when that control is violated or profited from by outside parties. This paper interrogates how those concerns may be ameliorated by embracing an ethics of care.


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4074
Author(s):  
Katie E. Doull ◽  
Carl Chalmers ◽  
Paul Fergus ◽  
Steve Longmore ◽  
Alex K. Piel ◽  
...  

Drones are increasingly used in conservation to tackle the illegal poaching of animals. An important aspect of using drones for this purpose is establishing the technological and environmental factors that increase the chances of detecting poachers. Recent studies have investigated these factors, and this research builds upon them while also exploring the efficacy of machine learning for automated detection. In an experimental setting with voluntary test subjects, various factors were tested for their effect on detection probability: camera type (visible-spectrum, RGB, versus thermal infrared, TIR), time of day, camera angle, canopy density, and walking versus stationary test subjects. The drone footage was analysed both manually by volunteers and through automated detection software. A generalised linear model with a logit link function was used to statistically analyse the data for both types of analysis. The findings concluded that a TIR camera improved detection probability, particularly at dawn and with a 90° camera angle. An oblique angle was more effective during RGB flights, and whether test subjects were walking or stationary did not influence detection with either camera. Probability of detection decreased with increasing vegetation cover. The machine-learning software achieved a detection probability of 0.558; however, it produced nearly five times more false positives than manual analysis. Manual analysis, in turn, produced 2.5 times more false negatives than automated detection. Although manual analysis produced more true positive detections than automated detection in this study, the automated software gives promising results, and the advantages of automated methods over manual analysis make it a tool with the potential to be successfully incorporated into anti-poaching strategies.
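A GLM with a logit link, as used in this study, models detection probability as p = 1 / (1 + exp(-(b0 + Σ b_i·x_i))). A minimal sketch follows; the coefficient values and predictor encoding are purely hypothetical, not the paper's estimates:

```python
import math

def detection_probability(intercept, coefs, features):
    """Predicted probability from a fitted logit-link GLM:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    """
    eta = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients for: TIR camera (1/0), dawn (1/0),
# 90-degree camera angle (1/0), canopy cover fraction.
# Values are illustrative only.
coefs = [1.2, 0.8, 0.6, -2.5]
p = detection_probability(-0.5, coefs, [1, 1, 1, 0.3])
```

With these made-up coefficients, a TIR flight at dawn with a 90° angle over 30% canopy cover yields p ≈ 0.79; note the negative canopy coefficient mirrors the reported decrease in detection with vegetation cover.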


2021 ◽  
Vol 2 ◽  
pp. 1-18
Author(s):  
Benjamin J. Lynn ◽  
Roxane Coche ◽  
Ashleigh Messick

In 2017, NFL viewers complained when NBC Sports used the "Madden" camera for live play-by-play coverage of two Thursday Night Football games. Their comments indicated that they had a difficult time estimating yardage from the new perspective. Those games were just two recent examples of viewers complaining about changes in the visual presentation of live sports broadcasts, a phenomenon that has recurred with the Madden camera for more than a decade. Sports broadcasters' inability to adjust their production technique for live football coverage, despite repeated attempts, provides important insights about the nature of mass communication. As sports broadcasters continue to look for new production techniques in a constantly evolving media landscape, these findings could help guide their production practices. Using game footage from four NFL broadcasts, the present study tested for differences in yardage estimations made from the traditional game camera (i.e., a stationary camera perpendicular to the field) and the Madden camera (i.e., a moving camera on wires positioned over the field). Participants (N = 473) were randomly assigned to watch 11 plays from either the traditional game camera angle or the Madden camera angle. No significant differences were found in estimates of yardage gains based on camera angle. The high variance in the findings suggests that distance estimations are complex visual processes that may require specialized training to improve accuracy.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3793
Author(s):  
Nan Wu ◽  
Kazuhiko Kawamoto

Large datasets are often used to improve the accuracy of action recognition. However, very large datasets are problematic because, for example, their annotation is labor-intensive. This has encouraged research in zero-shot action recognition (ZSAR). Presently, most ZSAR methods recognize actions from individual video frames. These methods are affected by lighting, camera angle, and background, and most are unable to process time-series data, which reduces model accuracy. In this paper, to address these problems, we propose a three-stream graph convolutional network that processes both types of data. Our model has two parts: one processes RGB data, which contains extensive useful information, and the other processes skeleton data, which is unaffected by lighting and background. By combining the two outputs with a weighted sum, the model predicts the final results for ZSAR. Experiments conducted on three datasets demonstrate that our model is more accurate than a baseline model. Moreover, we show that the model can learn from human experience, which further improves its accuracy.
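The weighted-sum fusion of the two streams described above can be sketched as below. The function name, the fixed weight, and the per-class score lists are illustrative assumptions, not the paper's implementation:

```python
def fuse_predictions(rgb_scores, skeleton_scores, weight=0.6):
    """Fuse per-class scores from the RGB stream and the skeleton stream
    with a weighted sum; the class with the highest fused score wins.

    `weight` balances the two streams (illustrative default).
    Returns (predicted_class_index, fused_scores).
    """
    fused = [weight * r + (1 - weight) * s
             for r, s in zip(rgb_scores, skeleton_scores)]
    return max(range(len(fused)), key=fused.__getitem__), fused
```

For instance, with equal weighting, a class favoured strongly by the RGB stream can still win even if the skeleton stream slightly prefers another class.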


Author(s):  
Lucía Cores-Sarría ◽  
Brent J. Hale ◽  
Annie Lang

Abstract. This study tests the effects of camera distance and camera angle on emotional response across four categories of pictures covering a large emotional range (positive and negative miscellanea, erotica, and threat), using the International Affective Picture System (IAPS), a large database of emotionally evocative photographs. We content-analyzed 722 images for content category and camera framing (distance and angle), employed these as independent factors in analyses, and used the IAPS' pre-existing normative average ratings of emotional valence, arousal, and dominance as dependent variables. As hypothesized, affective responses were generally increased by closer framing and by high and low angles (compared to straight angles), but the content of the picture played an important role in determining effect strength and direction. In particular, closeness increased arousal for all picture groups but had the opposite effect on positive miscellaneous pictures; straight angles decreased the emotional response for the two miscellanea groups; and low angles increased the emotional response for threatening pictures. This study is the first to show that previously found camera-framing effects apply to pictures of high emotional intensity (e.g., erotica and threat). We suggest that future work consider formal manipulations alongside message content.


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Bobby Halim ◽  
Yosef Yulius

In an advertisement, the meaning is sometimes not presented straightforwardly. Every TVC (television commercial) that uses real human talent depends on the camera's perspective. This study, "The Importance of Camera Views on Video Media Advertising," examines the relationship between camera placement (angle) and the message conveyed by a TVC, and how cinematography is used as visual rhetoric. The study was conducted qualitatively with a semiotic film-analysis approach. The data are grouped into four structures: visual structure, verbal structure (language, character, settings, time), narrative structure, and audio structure. Diachronic analysis uses the notions of signifier and signified. The shot types considered are Extreme Long Shot (ELS), Very Long Shot (VLS), Long Shot (LS), Medium Long Shot (MLS), Medium Shot (MS), Medium Close Up (MCU), Close Up (CU), Big Close Up (BCU), Low Angle Shot, and Eye Level Shot. Images for analysis were selected on the following basis: (1) select an image according to the type of camera angle; (2) sort images according to the type of camera angle; (3) analyze each film and the impression built from each camera perspective; (4) compare the impressions from the two analyses; (5) explain the reasons for the different impressions that arise when the camera viewpoint is the same but the impression obtained differs. The camera angle is important in creating particular impressions, for example the impression of horror.


2021 ◽  
Vol 21 (2) ◽  
pp. 149-157
Author(s):  
Youngseok Song ◽  
Hyeong Jun Lee ◽  
Byung Sik Kim ◽  
Moojong Park

In this study, the camera angle of a UAV (unmanned aerial vehicle) was varied while taking aerial photographs, and 2D and 3D models were constructed to evaluate the quantitative impact. The study area was the Waryong Bridge, located in Dalseong-gun, Daegu Metropolitan City. UAV camera angles of 90°, 75°, and 60° were compared by analyzing the DSM, the orthophoto, and the 3D model. In the DSM and orthophoto analysis, relative to 90°, the number of images was 1.05 times higher at 75° and 1.10 times higher at 60°, and matching was 1.09 times and 1.60 times higher, respectively. The point cloud for building the 3D models contained 1.17 times more analysis points at 75° and 1.47 times more at 60° than at 90°. However, the ground area covered by one orthophoto pixel increased by 1.10 times at 75° and 1.34 times at 60° compared to 90°, reducing resolution. As the UAV camera angle decreases, the overlap ratio of the aerial photographs increases and wide-area structures can be reconstructed; however, because the precision of the 3D model differs greatly, a shooting plan suited to the purpose should be established.
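The growth in per-pixel ground area as the camera tilts away from nadir can be approximated with simple geometry. The sketch below uses a crude model (slant range and footprint elongation each growing as 1/cos of the tilt, so area as roughly 1/cos²), which is an illustrative assumption, not the paper's photogrammetric computation; it lands near the reported 1.10 and 1.34 ratios:

```python
import math

def pixel_area_ratio(camera_angle_deg):
    """Approximate growth of the ground area covered by one pixel as the
    camera tilts away from nadir (90 degrees = pointing straight down).

    Simple model: the slant range to the frame centre grows as 1/cos(tilt)
    and the footprint elongates by roughly the same factor, so per-pixel
    ground area scales as about 1/cos^2(tilt).
    """
    tilt = math.radians(90 - camera_angle_deg)  # tilt off nadir
    return 1.0 / math.cos(tilt) ** 2

# Ratios relative to the nadir (90-degree) configuration.
ratios = {angle: round(pixel_area_ratio(angle), 2) for angle in (90, 75, 60)}
```

This yields ratios of about 1.07 at 75° and 1.33 at 60°, in the same range as the study's measured 1.10 and 1.34.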

