ASSESSMENT OF FORESTRY ACTIVITY OF UKRAINE IN THE CONTEXT OF TIME REQUIREMENTS

Author(s):  
S.S. Makarenko
VASA ◽  
2019 ◽  
Vol 48 (6) ◽  
pp. 516-522 ◽  
Author(s):  
Verena Mayr ◽  
Mirko Hirschl ◽  
Peter Klein-Weigel ◽  
Luka Girardi ◽  
Michael Kundi

Summary. Background: For the diagnosis of peripheral arterial occlusive disease (PAD), the Doppler-based ankle-brachial index (dABI) is recommended as the first non-invasive measurement. Due to limitations of dABI, oscillometry might be used as an alternative. The aim of our study was to investigate whether a semi-automatic, four-point oscillometric device provides comparable diagnostic accuracy. Furthermore, time requirements and patient preferences were evaluated. Patients and methods: 286 patients were recruited for the study; 140 without and 146 with PAD. The Doppler-based (dABI) and oscillometric (oABI and pulse wave index, PWI) measurements were performed on the same day in a randomized cross-over design. Specificity and sensitivity against verified PAD diagnosis were computed and compared by McNemar tests. ROC analyses were performed and areas under the curve were compared by non-parametric methods. Results: oABI had significantly lower sensitivity (65.8%, 95% CI: 59.2%–71.9%) compared to dABI (87.3%, 95% CI: 81.9%–91.3%) but significantly higher specificity (79.7%, 95% CI: 74.7%–83.9% vs. 67.0%, 95% CI: 61.3%–72.2%). PWI had a sensitivity comparable to dABI. The combination of oABI and PWI had the highest sensitivity (88.8%, 95% CI: 85.7%–91.4%). ROC analysis revealed that PWI had the largest area under the curve, but no significant difference between oABI and dABI was observed. The time requirement for oABI was significantly shorter, by about 5 min, and significantly more patients would prefer oABI for future testing. Conclusions: Semi-automatic oABI measurements using the AngER device provide diagnostic results comparable to the conventional Doppler method, while PWI performed best. The time saved by oscillometry could be important, especially in high-volume centers and epidemiologic studies.
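As a rough illustration of the statistics reported above, the sketch below shows how sensitivity for two paired index tests (dABI vs. oABI) against a verified PAD diagnosis could be computed and compared with a McNemar test. It assumes Python with NumPy and statsmodels; the patient results are invented for illustration and are not the study's data.

```python
# Illustrative only: made-up paired test results, not the study's data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Results for patients with verified PAD (1 = test positive, 0 = negative)
dabi = np.array([1, 1, 0, 1, 1, 0, 1, 1])   # Doppler-based ABI
oabi = np.array([1, 0, 0, 1, 0, 0, 1, 1])   # oscillometric ABI

# Sensitivity = fraction of diseased patients the test flags as positive
print(f"dABI sensitivity: {dabi.mean():.2f}")
print(f"oABI sensitivity: {oabi.mean():.2f}")

# McNemar test on the paired 2x2 table; the discordant pairs drive the comparison
table = np.zeros((2, 2), dtype=int)
for d, o in zip(dabi, oabi):
    table[1 - d, 1 - o] += 1                # rows: dABI +/-, cols: oABI +/-
print(mcnemar(table, exact=True))
```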


2003 ◽  
Author(s):  
E. Lee ◽  
C. Feigley ◽  
J. Hussey ◽  
J. Khan ◽  
M. Ahmed

2021 ◽  
Vol 1 (1) ◽  
Author(s):  
E. Bertino ◽  
M. R. Jahanshahi ◽  
A. Singla ◽  
R.-T. Wu

Abstract. This paper addresses the problem of efficient and effective data collection and analytics for applications such as civil infrastructure monitoring and emergency management. This problem requires the development of techniques by which data acquisition devices, such as IoT devices, can: (a) perform local analysis of collected data; and (b) based on the results of such analysis, autonomously decide on further data acquisition. The ability to perform local analysis is critical for reducing transmission costs and latency, as the results of an analysis are usually smaller in size than the original data. For example, under strict real-time requirements, the analysis results can be transmitted in real time, whereas the actual collected data can be uploaded later. The ability to autonomously decide on further data acquisition enhances scalability and reduces the need for real-time human involvement in data acquisition processes, especially in contexts with critical real-time requirements. The paper focuses on deep neural networks and discusses techniques for supporting transfer learning and pruning, so as to reduce both the time needed to train the networks and the size of the networks deployed at IoT devices. We also discuss approaches based on reinforcement learning techniques for enhancing the autonomy of IoT devices.
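The abstract names transfer learning and pruning as the two techniques for shrinking networks deployed at IoT devices. The following is a minimal sketch of both, assuming PyTorch/torchvision; the MobileNetV2 backbone, the four-class head, and the 60% sparsity are illustrative choices, not the paper's configuration.

```python
# A minimal sketch, assuming PyTorch/torchvision; model, class count, and
# sparsity are illustrative, not the paper's configuration.
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

# Transfer learning: reuse a pretrained feature extractor, retrain only a
# small task-specific head (e.g., damage classes for infrastructure monitoring).
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                                # freeze the backbone
model.classifier[1] = nn.Linear(model.last_channel, 4)     # hypothetical 4-class head

# Pruning: drop 60% of the smallest-magnitude weights in each conv layer
# to shrink the network before deployment on an IoT device.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")                     # make the sparsity permanent
```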


2021 ◽  
Vol 13 ◽  
pp. 175682932110048
Author(s):  
Huajun Song ◽  
Yanqi Wu ◽  
Guangbing Zhou

With the rapid development of drones, many problems have arisen, such as invasion of privacy and threats to security. Inspired by biology, and in order to achieve effective detection and robust tracking of small targets such as unmanned aerial vehicles, a binocular vision detection system is designed. The system is composed of long-focus and wide-angle dual cameras, a servo pan-tilt, and dual processors for detecting and identifying targets. To address the shortcomings of the spatio-temporal context tracking algorithm, which cannot adapt to scale changes and is prone to tracking failure in complex scenes, a scale filter and a loss criterion are introduced as improvements. Qualitative and quantitative experiments show that the designed system can adapt to scale changes and partial occlusion during detection, and meets the real-time requirements. Both the hardware system and the algorithm have reference value for the application of anti-unmanned aerial vehicle systems.
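The two additions described above, a scale filter and a loss criterion on top of the spatio-temporal context (STC) tracker, can be sketched as a per-frame update like the one below. This is an illustrative outline in Python, not the authors' implementation; the confidence threshold and scale learning rate are assumed values.

```python
# Illustrative outline only; threshold and learning rate are assumed values.
import numpy as np

CONF_THRESHOLD = 0.25   # loss criterion: below this peak response, declare the target lost
SCALE_LR = 0.075        # scale filter: smoothing factor for the estimated target scale

def track_step(response_map, prev_scale, measured_scale):
    """One tracking update: locate the peak, apply the loss criterion, smooth the scale."""
    peak = float(response_map.max())
    if peak < CONF_THRESHOLD:
        # Loss criterion triggered: hand control back to the detector for re-acquisition.
        return None, prev_scale
    y, x = np.unravel_index(int(response_map.argmax()), response_map.shape)
    # Scale filter: blend the previous scale with the newly measured one.
    new_scale = (1 - SCALE_LR) * prev_scale + SCALE_LR * measured_scale
    return (x, y), new_scale
```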


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency apps in computer vision are ubiquitous in today's world of mixed-reality devices. These innovations provide a platform that can leverage the improving technology of depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality apps using low-power hardware accelerators. We parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. We demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality apps.
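To make the 11 ms latency budget concrete, here is a small timing sketch for a filter-based depth-upsampling step. It assumes Python with OpenCV's contrib module (cv2.ximgproc) and uses a guided filter as a stand-in for the paper's algorithm; the resolutions and parameters are illustrative.

```python
# Timing sketch; requires opencv-contrib-python for cv2.ximgproc.
# Resolutions and filter parameters are illustrative.
import time
import numpy as np
import cv2

LATENCY_BUDGET_MS = 11.0

guide = np.random.rand(1080, 1920).astype(np.float32)     # high-resolution guide image
depth_low = np.random.rand(270, 480).astype(np.float32)   # low-resolution depth map

start = time.perf_counter()
depth_up = cv2.resize(depth_low, (1920, 1080), interpolation=cv2.INTER_LINEAR)
depth_up = cv2.ximgproc.guidedFilter(guide, depth_up, 8, 1e-4)  # edge-aware refinement
elapsed_ms = (time.perf_counter() - start) * 1000.0

within = "within" if elapsed_ms <= LATENCY_BUDGET_MS else "over"
print(f"depth upsampling took {elapsed_ms:.1f} ms ({within} the 11 ms budget)")
```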


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task for the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy and are also more reliable in detecting minority classes. Faster R-CNN with Res2Net-101 achieves the best speed/accuracy trade-off but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
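As a sketch of the transfer-learning setup described above, the snippet below re-targets a pretrained two-stage Faster R-CNN to the three Waymo classes using torchvision. The ResNet-50 FPN backbone stands in for the backbones evaluated in the study; this is illustrative, not the authors' training code.

```python
# Sketch only: ResNet-50 FPN stands in for the backbones used in the study.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Re-target the detection head to the Waymo classes (plus background).
num_classes = 4  # background, vehicle, pedestrian, cyclist
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# The model is then fine-tuned on Waymo images; at inference time, lower
# input resolutions trade accuracy for the real-time speed discussed above.
```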


1996 ◽  
Vol 15 (6) ◽  
pp. 497-503 ◽  
Author(s):  
T. Soriano ◽  
M. Menéndez ◽  
P. Sanz ◽  
M. Repetto

1. The described analytical procedure permits the simultaneous determination of the main n-hexane metabolites in urine. 2-Hexanone, 2-hexanol, 2,5-hexanediol and 2,5-hexanedione were chosen to dose the rats used in this study. All urine samples were collected and analysed on a daily basis, before and after acidic hydrolysis (pH 0.1), by GC/MS. 2-Hexanone, 2,5-dimethylfurane, γ-valerolactone and 2,5-hexanedione were determined before hydrolysis; 2-hexanol and 2,5-hexanediol, after hydrolysis; and 5-hydroxy-2-hexanone and 4,5-dihydroxy-2-hexanone were calculated by the difference between γ-valerolactone and 2,5-hexanedione with and without hydrolysis, respectively. 2. A metabolic scheme was proposed reflecting the biotransformations undergone by the four compounds assayed. We consider 2,5-dimethylfurane as a 'true metabolite' because the quantities detected were always greater before hydrolysis. 3. It has been reported that human and rat n-hexane metabolism follow a similar pattern. Therefore, as a practical application and without increasing either sample or time requirements, the simultaneous quantification of the different metabolites and their excretion profile could provide better information about the metabolic situation of exposed workers than the determination of 2,5-hexanedione alone. According to our experimental results, 4,5-dihydroxy-2-hexanone itself would be a good toxicity indicator.
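The "difference" metabolites mentioned in point 1 can be illustrated with a small worked calculation, assuming Python; the concentrations below are made up for illustration and are not the study's data.

```python
# Made-up concentrations, for illustration of the calculation only.
gvl_with, gvl_without = 12.4, 7.1   # γ-valerolactone, µmol/L, with / without hydrolysis
hd_with, hd_without = 9.8, 6.3      # 2,5-hexanedione, µmol/L, with / without hydrolysis

hydroxy_hexanone = gvl_with - gvl_without       # 5-hydroxy-2-hexanone
dihydroxy_hexanone = hd_with - hd_without       # 4,5-dihydroxy-2-hexanone

print(f"5-hydroxy-2-hexanone:     {hydroxy_hexanone:.1f} µmol/L")
print(f"4,5-dihydroxy-2-hexanone: {dihydroxy_hexanone:.1f} µmol/L")
```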


2010 ◽  
Vol 47 (11) ◽  
pp. 1299-1304 ◽  
Author(s):  
Reed B. Freeman ◽  
Chad A. Gartrell ◽  
Lillian D. Wakeley ◽  
Ernest S. Berney ◽  
Julie R. Kelley

The density of soil is crucial in engineering, construction, and research. Standard methods to determine density use procedures, equipment or expendable materials that limit their effectiveness in challenging field conditions. Some methods require burdensome logistics or have time requirements that limit their use or the number of tests that can be executed. A test method, similar to the sand-cone method, was developed that uses steel shot as the material to which a volume of soil is compared to calculate soil density. Steel shot is easily recovered and reused, eliminating the need for specialty sand and calibrated cones or containers, and allows rapid determination of the volume of displaced soil. Excavated soil also provides measurements of total mass and moisture content. Volume, mass, and moisture content are applied in simple calculations to determine wet and dry densities and unit weight of the soil. Proficiency in performing the test can be achieved with minimal training, and the required kit can be assembled for a reasonable cost. Field uses of the method in dry environments in a variety of soil types demonstrated that the method can produce repeatable results within 2% of the values of soil density determined by traditional methods, with advantages in logistics.
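The "simple calculations" mentioned above can be illustrated with a short worked example, assuming Python; the shot bulk density, masses, and moisture content below are made-up values, with the shot bulk density treated as a calibration constant.

```python
# Made-up field readings; the shot bulk density would come from calibration.
shot_mass_kg = 7.85          # mass of steel shot needed to fill the excavated hole
shot_bulk_density = 4600.0   # calibrated bulk density of the shot, kg/m^3
soil_wet_mass_kg = 3.10      # mass of the excavated (wet) soil
moisture_content = 0.12      # gravimetric water content (mass of water / mass of dry soil)

hole_volume_m3 = shot_mass_kg / shot_bulk_density
wet_density = soil_wet_mass_kg / hole_volume_m3        # kg/m^3
dry_density = wet_density / (1 + moisture_content)     # kg/m^3
dry_unit_weight = dry_density * 9.81 / 1000.0          # kN/m^3

print(f"hole volume:     {hole_volume_m3 * 1e6:.0f} cm^3")
print(f"wet density:     {wet_density:.0f} kg/m^3")
print(f"dry density:     {dry_density:.0f} kg/m^3")
print(f"dry unit weight: {dry_unit_weight:.1f} kN/m^3")
```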


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4045
Author(s):  
Alessandro Sassu ◽  
Jose Francisco Saenz-Cogollo ◽  
Maurizio Agelli

Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most of the recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow the deployment and use of new models on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, orchestration of services, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, and also provides high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data on clients running in browsers or on any other web-based platform.
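Since the abstract states that extracted video data are exposed over standard HTTP interfaces, a client might look roughly like the sketch below. The host, endpoint path, and JSON layout are assumptions for illustration only and are not Deep-Framework's documented API.

```python
# Hypothetical endpoint and response layout, for illustration only.
import requests

EDGE_HOST = "http://edge-node.local:8000"   # assumed address of the edge deployment

resp = requests.get(f"{EDGE_HOST}/api/streams/cam0/results", timeout=2.0)
resp.raise_for_status()

for detection in resp.json().get("detections", []):   # assumed JSON structure
    print(detection.get("label"), detection.get("confidence"))
```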

