Benthic wildlife underwater video recording during longline survey in Weddell Sea

2020 ◽  
pp. 75-83
Author(s):  
P. Zabroda ◽  
L. Pshenichnov ◽  
D. Marichev ◽  
...  

A non-extractive method for studying benthic wildlife with an underwater video system (UVS) was applied during a research survey with a bottom longline in the northwestern part of the Weddell Sea. At the longline survey stations, data on wind direction and speed, sea state, air temperature, cloudiness, ice concentration, atmospheric pressure, precipitation, depth and coordinates of the anchor setting, and direction of the longline set were also collected. It was found that the UVS with additional light during video recording does not disturb animal behavior at depths of 700–1100 m in the study area. Three UVS observations are described. The slope of the northwestern part of the Weddell Sea can be considered a spawning site of the squid Slosarczykovia circumantarctica. The data indicate a wide distribution of Antarctic krill (Euphausia superba) and Antarctic jonasfish (Notolepis coatsi) in the area. Such a high density of adult Antarctic jonasfish in a single place has never been recorded before. Preliminary observations and analysis of the video recordings showed that the shooting lighting and observation distance are sufficient for observing and identifying animals, their behavior and movement. This technique will allow estimating relative species abundance and size distribution.
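The abstract notes that the technique should allow estimating relative species abundance from the video material. Purely as an illustration (the authors do not describe their analysis pipeline), a minimal Python sketch of deriving relative abundance from annotated UVS sightings could look like the following; the species counts and the `relative_abundance` helper are hypothetical examples, not part of the study:

```python
from collections import Counter

def relative_abundance(sightings):
    """Share of each species among all individuals counted in the
    annotated video sightings of one UVS deployment."""
    totals = Counter()
    for species, count in sightings:
        totals[species] += count
    grand_total = sum(totals.values())
    return {species: n / grand_total for species, n in totals.items()}

# Hypothetical annotation records: (species, individuals counted)
sightings = [("Euphausia superba", 40),
             ("Notolepis coatsi", 15),
             ("Slosarczykovia circumantarctica", 2)]
print(relative_abundance(sightings))
```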

1984 ◽  
Vol 58 (1) ◽  
pp. 23-30
Author(s):  
Donald S. Martin ◽  
Ming-Shiunn Huang

The actor/observer effect was examined by Storms in a 1973 study which manipulated perceptual orientation using video recordings. Storms' study was complex and some of his results were equivocal. The present study attempted to recreate the perceptual reorientation effect using a simplified experimental design and an initial difference between actors and observers which was the reverse of the original effect. Female undergraduates performed a motor co-ordination task as actors while watched by observers. Each person made attributions for the actor's behaviour before and after watching a video recording of the performance. For a control group the video recording was of an unrelated variety show excerpt. Actors' initial attributions were less situational than observers'. Both actors and observers became more situational after the video replay, but this effect occurred in both experimental and control groups. It was suggested that the passage of time between the first and second recording of attributions could account for the findings, and that care should be taken when interpreting Storms' (1973) study and others that did not adequately control for temporal effects.


2020 ◽  
pp. 1-8
Author(s):  
Raluca Tanasa

Throws and catches in rhythmic gymnastics represent one of the fundamental groups of apparatus handling. For the hoop they are actions of great showmanship, but also elements of risk. The purpose of this paper is to improve the throw execution technique through biomechanical analysis in order to increase the performance of female gymnasts in competitions. The subjects of this study were 8 gymnasts aged 9–10 years who practised Rhythmic Gymnastics at performance level. The experiment consisted of video recording and biomechanical analysis of the element "Hoop throw, step jump and catch". After processing the video recordings with the Simi Motion software, we calculated values for the launch height, the horizontal distance and the throwing angle between the arm and the horizontal. Based on the data obtained, we designed a series of means to improve the execution technique for the elements comprised within the research and implemented them in the training process. Regarding the interpretation of the results: height and horizontal distance in this element show correlation coefficients of 0.438 and 0.323, with a mean significance of 0.005. The values of the arm/horizontal angle improved for all the gymnasts, the correlation coefficient being 0.931, with a significance of 0.01. As a general conclusion, the results obtained show that the means introduced in the experiment proved their efficacy, which led to the optimisation of the execution technique, thus confirming the research hypothesis.
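As a purely illustrative sketch of the kinematic and statistical quantities reported above (the study itself used the Simi Motion software), the throwing angle between the arm and the horizontal and the correlation coefficient between two measurement series could be computed from digitised video coordinates roughly as follows; the point coordinates and function names are hypothetical:

```python
import numpy as np

def arm_horizontal_angle(shoulder_xy, wrist_xy):
    """Angle (degrees) between the arm segment and the horizontal,
    from two digitised 2-D points of a single video frame."""
    dx, dy = np.subtract(wrist_xy, shoulder_xy)
    return float(np.degrees(np.arctan2(dy, dx)))

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series,
    e.g. pre- and post-intervention release angles of the gymnasts."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Hypothetical digitised frame: shoulder at (0.42, 1.18) m, wrist at (0.78, 1.62) m
print(round(arm_horizontal_angle((0.42, 1.18), (0.78, 1.62)), 1), "degrees")
```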


Author(s):  
V.K. Fishchenko ◽  
P.S. Zimin ◽  
A.V. Zatserkovnyy ◽  
A.E. Subote ◽  
A.V. Golik ◽  
...  

Since 2012, the Pacific Oceanological Institute (POI) of the Far Eastern Branch of the Russian Academy of Sciences has been developing and studying the capabilities of stationary underwater video surveillance technologies. Three underwater complexes have been deployed: two in Alekseev Bay (Popov Island) and one in Vityaz Bay (Posyet Gulf). By now, significant volumes of information have been accumulated in the form of snapshots and video recordings of underwater scenes, and interfaces have been developed to provide this information to users over the Internet. Technologies have been developed to support the work of geographically dispersed experts who compile biological descriptions of the video material, similar to those developed in leading foreign marine biology organizations. Methods for estimating the vital-activity parameters of some marine hydrobiont species from video have been developed and tested. Thanks to continuous observation, several rare events of interest to marine biologists have been recorded. Methods for estimating hydrological characteristics of the environment from the analysis of video streams from underwater cameras have also been developed and tested; these results are important for accompanying observations of marine biota with data on the external conditions in which its activity occurs. The possibility of using the cameras' audio channel to register and analyse acoustic noise from marine vessels has been demonstrated, as has the possibility of using the underwater video complexes to run experiments studying the reaction of marine hydrobionts to targeted physical signals.


Author(s):  
Robin Pla ◽  
Thibaut Ledanois ◽  
Escobar David Simbana ◽  
Anaël Aubry ◽  
Benjamin Tranchard ◽  
...  

The main aim of this study was to evaluate the validity and reliability of a swimming sensor for assessing swimming performance and spatial-temporal variables. Six international male open-water swimmers completed a protocol consisting of two training sets: a 6×100 m individual medley and a continuous 800 m freestyle set. Swimmers were equipped with a wearable sensor, the TritonWear, to automatically collect spatial-temporal variables: speed, lap time, stroke count (SC), stroke length (SL), stroke rate (SR), and stroke index (SI). Video recordings were used as a "gold standard" to assess the validity and reliability of the TritonWear sensor. The results show that the sensor provides accurate results in comparison with the video recording measurements. A very high accuracy was observed for lap time, with a mean absolute percentage error (MAPE) under 5% for each stroke (2.2, 3.2, 3.4 and 4.1% for butterfly, backstroke, breaststroke and freestyle respectively), but high error ranges indicate a dependence on swimming technique. Stroke count accuracy was higher for symmetric strokes than for alternating strokes (MAPE: 0, 2.4, 7.1 and 4.9% for butterfly, breaststroke, backstroke and freestyle respectively). The other variables (SL, SR and SI), derived from the SC and the lap time, also show good accuracy in all strokes. The wearable sensor provides accurate real-time feedback on spatial-temporal variables in six international open-water swimmers during classical training sets (at low to moderate intensities), which could make it a useful tool for coaches, allowing them to monitor training load with minimal effort.
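For reference, the mean absolute percentage error (MAPE) used to compare the sensor output against the video gold standard can be computed as in the sketch below; this is a generic illustration with made-up lap times, not the authors' analysis code:

```python
import numpy as np

def mape(sensor_values, reference_values):
    """Mean absolute percentage error of sensor measurements against a
    gold-standard reference (here: video-derived lap times)."""
    sensor = np.asarray(sensor_values, dtype=float)
    reference = np.asarray(reference_values, dtype=float)
    return float(np.mean(np.abs(sensor - reference) / reference) * 100.0)

# Hypothetical lap times (s): sensor vs. video for four freestyle laps
print(round(mape([61.8, 62.4, 63.1, 62.0], [60.9, 61.8, 62.2, 61.5]), 2), "% MAPE")
```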


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 702 ◽  
Author(s):  
Jin Hyun Cheong ◽  
Sawyer Brooks ◽  
Luke J. Chang

Advances in computer vision and machine learning algorithms have enabled researchers to extract facial expression data from face video recordings with greater ease and speed than standard manual coding methods, which has led to a dramatic increase in the pace of facial expression research. However, there are many limitations in recording facial expressions in laboratory settings.  Conventional video recording setups using webcams, tripod-mounted cameras, or pan-tilt-zoom cameras require making compromises between cost, reliability, and flexibility. As an alternative, we propose the use of a mobile head-mounted camera that can be easily constructed from our open-source instructions and blueprints at a fraction of the cost of conventional setups. The head-mounted camera framework is supported by the open source Python toolbox FaceSync, which provides an automated method for synchronizing videos. We provide four proof-of-concept studies demonstrating the benefits of this recording system in reliably measuring and analyzing facial expressions in diverse experimental setups, including group interaction experiments.
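FaceSync's own interface is not described in this abstract; as a rough illustration of the general idea behind automatic video synchronisation, two recordings can be aligned by cross-correlating their audio tracks. The sketch below shows that generic approach and is not the FaceSync toolbox API; the toy signals and the `sync_offset_seconds` helper are hypothetical:

```python
import numpy as np

def sync_offset_seconds(ref_audio, other_audio, sample_rate):
    """Seconds by which the content of `other_audio` trails the same content
    in `ref_audio` (positive = other starts later), estimated from the peak
    of the full audio cross-correlation."""
    corr = np.correlate(other_audio, ref_audio, mode="full")
    lag = int(np.argmax(corr)) - (len(ref_audio) - 1)
    return lag / sample_rate

# Toy example: mono tracks at a small sample rate to keep the demo fast
rate = 8000
ref = np.random.randn(rate)                           # 1 s of reference audio
other = np.concatenate([np.zeros(rate // 4), ref])    # same audio, 0.25 s later
print(sync_offset_seconds(ref, other, rate))          # ~0.25
```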


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 497
Author(s):  
Sébastien Villon ◽  
Corina Iovan ◽  
Morgan Mangeas ◽  
Laurent Vigliola

With the availability of low-cost and efficient digital cameras, ecologists can now survey the world's biodiversity through image sensors, especially in the previously rather inaccessible marine realm. However, the data rapidly accumulate, and ecologists face a data processing bottleneck. While computer vision has long been used as a tool to speed up image processing, it is only since the breakthrough of deep learning (DL) algorithms that the revolution in the automatic assessment of biodiversity by video recording can be considered. However, current applications of DL models to biodiversity monitoring do not consider some universal rules of biodiversity, especially rules on the distribution of species abundance, species rarity and ecosystem openness. Yet, these rules imply three issues for deep learning applications: first, the imbalance of long-tailed datasets biases the training of DL models; second, scarce data greatly lessen the performance of DL models for classes with few samples; finally, the open-world issue means that objects absent from the training dataset are incorrectly classified in the application dataset. Promising solutions to these issues are discussed, including data augmentation, data generation, cross-entropy modification, few-shot learning and open set recognition. At a time when biodiversity faces the immense challenges of climate change and the Anthropocene defaunation, stronger collaboration between computer scientists and ecologists is urgently needed to unlock the automatic monitoring of biodiversity.
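One common form of the "cross-entropy modification" mentioned for long-tailed datasets is to re-weight the loss by inverse class frequency so that rare species contribute more to training. The PyTorch sketch below illustrates that general technique with hypothetical species counts; it is not code from the reviewed applications:

```python
import torch
import torch.nn as nn

def balanced_cross_entropy(class_counts):
    """Cross-entropy loss re-weighted by inverse class frequency, so that
    rare classes in a long-tailed dataset are not drowned out by common ones."""
    counts = torch.tensor(class_counts, dtype=torch.float32)
    weights = counts.sum() / (len(counts) * counts)   # rare classes get larger weights
    return nn.CrossEntropyLoss(weight=weights)

# Hypothetical dataset: one very common species and two rare ones
criterion = balanced_cross_entropy([9000, 400, 50])
logits = torch.randn(8, 3)              # batch of 8 images, 3 species
labels = torch.randint(0, 3, (8,))
loss = criterion(logits, labels)        # rare-species mistakes are penalised more
```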


Author(s):  
Jason Byrd ◽  
Todd Stafford ◽  
Royce Gildersleeve ◽  
Christal Ferrance ◽  
Kara Kiblinger

Learn how easy video recording can be with the new OneButton Studio, located in Gateway Library. Presenters will share how this studio drastically reduces the amount of knowledge necessary to make professional video recordings and discuss applications in courses, including flipping the classroom, adding media literacy components to course objectives, and incorporating video recordings into assignments.


2021 ◽  
Vol 1 (2) ◽  
pp. 32-40
Author(s):  
A.M. Prasanna Kumar ◽  
Bharathi Gururaj

Moving-picture entertainment is a foremost source of amusement for people today. To entertain the public, film makers put a great deal of investment into film production, and their endeavour is ruined by the few people who pirate the films' content: they capture the video with a mobile phone camera and upload it to websites or sell it, causing huge losses. In this research work we propose a novel technique for reducing film piracy by preventing illicit video recordings in theatres. Light that is imperceptible to the spectators is projected from the display across the whole audience; it falls on the camera lens, which is sensitive to infrared rays, and makes the recorded video unfit to watch. A method for an anti-piracy system for the film industry is developed using a steganography technique in MATLAB.
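The abstract names a steganography technique implemented in MATLAB but gives no details of it. Purely as a generic illustration of the watermark-embedding idea (not the authors' method), a least-significant-bit scheme could be sketched as follows; the frame data and owner-ID bits are made up:

```python
import numpy as np

def embed_watermark(frame, watermark_bits):
    """Embed a binary watermark into the least-significant bits of an
    8-bit video frame -- a generic LSB steganography illustration."""
    flat = frame.astype(np.uint8).ravel()          # astype() copies the frame
    bits = np.asarray(watermark_bits, dtype=np.uint8)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite the lowest bit
    return flat.reshape(frame.shape)

def extract_watermark(frame, n_bits):
    """Recover the first n_bits of the embedded watermark from a frame."""
    return frame.astype(np.uint8).ravel()[:n_bits] & 1

# Hypothetical 8-bit greyscale frame and a short owner-ID watermark
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
marked = embed_watermark(frame, [1, 0, 1, 1, 0, 0, 1, 0])
print(extract_watermark(marked, 8))
```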


Circulation ◽  
2014 ◽  
Vol 130 (suppl_2) ◽  
Author(s):  
Yi-Chia Lee ◽  
Huang-Fu Yeh ◽  
Yen-Pin Chen ◽  
Chun-Yi Chang ◽  
Wei-Ting Chen ◽  
...  

Objectives: The accelerometer device (Q-CPR) has been developed and promoted to monitor the quality of cardiopulmonary resuscitation (CPR). Although the device registers the occurrence of no-flow intervals, it does not provide comprehensive information on the causes leading to these intervals. This study aimed to analyze the causes of CPR interruptions registered by Q-CPR by reviewing corresponding video recordings of the resuscitation sessions. Methods: Accelerometer recordings (Q-CPR, Philips) of 20 CPR episodes from December 2010 to April 2014 in a tertiary university ED were obtained. The frequency, timing, duration, and types of no-flow intervals, defined as no-flow durations >= 1.5 seconds, were reviewed. Video recordings of the corresponding CPR sessions were reviewed, and the causes leading to no-flow intervals registered by Q-CPR were categorized and analyzed. Results: The duration of CPR reviewed for the cases averaged 8.59 minutes (range 2.23 - 19.04 minutes). No-flow intervals (pauses >= 1.5 seconds) occurred 122 times (on average one interruption every 1.27 minutes of CPR), with an average no-flow interval of 6.45 seconds (range 1.54 - 51.50 seconds). Detailed review of the video recordings corresponding to the no-flow intervals registered by Q-CPR showed that the leading causes were pulse checks for pulseless electrical activity (PEA) (19.5%), pre-shock pauses (13.9%), ultrasound examination (11.6%) and intubation (9.6%). Conclusion: Video recording and time-motion analysis provide detailed information on the causes leading to no-flow intervals registered by Q-CPR and could complement information acquired by Q-CPR. Measures should be taken to address the leading causes of CPR interruption, especially pulse checks for PEA and pre-shock pauses, to improve the quality of CPR.
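As an illustration of how no-flow intervals (pauses of at least 1.5 seconds) might be extracted from a series of compression timestamps, a minimal sketch follows; the timestamp list and the `no_flow_intervals` helper are hypothetical and not part of the Q-CPR software:

```python
def no_flow_intervals(compression_times, threshold=1.5):
    """Given the timestamps (in seconds) of successive chest compressions,
    return every pause of at least `threshold` seconds as (start, duration)."""
    pauses = []
    for earlier, later in zip(compression_times, compression_times[1:]):
        gap = later - earlier
        if gap >= threshold:
            pauses.append((earlier, gap))
    return pauses

# Hypothetical compression timestamps with one 6-second pause for a pulse check
times = [0.0, 0.6, 1.2, 1.8, 7.8, 8.4, 9.0]
print(no_flow_intervals(times))   # [(1.8, 6.0)]
```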


CJEM ◽  
2016 ◽  
Vol 18 (S1) ◽  
pp. S110-S110
Author(s):  
B. Nolan ◽  
A. Ackery ◽  
B. Au

Introduction: Smartphones are everywhere. Recent technological advances allow for instantaneous high-quality video and audio recordings with the touch of a button. In Canada, physician smartphone use is highly regulated by provincial legislation, and multiple policies have been published by provincial physician colleges and the Canadian Medical Protective Association (CMPA). Patients, on the other hand, have no such laws to observe. We set out to look at what legislation and policies exist to provide guidance to physicians in two potential scenarios: when a patient requests to record a patient-physician interaction, and when a patient surreptitiously records a patient-physician interaction without the consent of the physician. Methods: A literature review searching for articles on patient video recordings and patient smartphone use was completed on both Medline and PubMed. Further review of each provincial privacy act and communication with each provincial privacy office was performed. Consultation with each provincial physician college and the CMPA was also done to identify any policies or recommendations to guide physicians. Results: Patients making video recordings do not fall under any provincial privacy law, and there are no existing policies from any provincial physician college or the CMPA to provide guidance. Therefore, physicians must rely on their own institution's policy regarding patient video recording in the health care setting. Be familiar with your institution's policy; if your institution does not have a policy, create one with the input of appropriate stakeholders. Patients may surreptitiously video record medical interactions without physician consent. Although this may not be permitted under an individual institution's policy, it is not illegal under the Criminal Code. Thus, it is important to behave in a professional manner at all times and assume you may be recorded at any time. Conclusion: The majority of patients' recordings will be made not with litigious intentions, but rather with the goal of understanding more about their own health and medical care. Unfortunately there are those who will undermine the physician-patient relationship. Physicians cannot allow this to cause distrust in future relationships, nor should it force physicians to practice more defensive medicine. Physicians must continue to practice the art of medicine and accept that "performance" is a part of the job.

