automated algorithms
Recently Published Documents


TOTAL DOCUMENTS

162
(FIVE YEARS 84)

H-INDEX

13
(FIVE YEARS 5)

2021 ◽  
Vol 47 (4) ◽  
pp. 1-36
Author(s):  
Cécile Daversin-Catty ◽  
Chris N. Richardson ◽  
Ada J. Ellingsrud ◽  
Marie E. Rognes

Mixed dimensional partial differential equations (PDEs) are equations coupling unknown fields defined over domains of differing topological dimension. Such equations naturally arise in a wide range of scientific fields including geology, physiology, biology, and fracture mechanics. Mixed dimensional PDEs are also commonly encountered when imposing non-standard conditions over a subspace of lower dimension, e.g., through a Lagrange multiplier. In this article, we present general abstractions and algorithms for finite element discretizations of mixed domain and mixed dimensional PDEs of codimension up to one (i.e., nD–mD with |n − m| ≤ 1). We introduce high-level mathematical software abstractions together with lower-level algorithms for expressing and efficiently solving such coupled systems. The concepts introduced here have also been implemented in the context of the FEniCS finite element software. We illustrate the new features through a range of examples, including a constrained Poisson problem, a set of Stokes-type flow models, and a model for ionic electrodiffusion.
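The block saddle-point structure that such Lagrange-multiplier couplings produce can be illustrated without the FEniCS abstractions described in the article. Below is a minimal NumPy sketch (not the authors' implementation): a 1D Poisson problem with a pointwise constraint imposed on a lower-dimensional (0D) subdomain through a multiplier, assembled and solved as one coupled block system.

```python
# Minimal sketch, not the authors' FEniCS implementation: -u'' = f on (0, 1)
# with u(0) = u(1) = 0 plus a point constraint u(0.5) = g enforced through a
# Lagrange multiplier living on a lower-dimensional (0D) subdomain. This gives
# the block saddle-point structure typical of mixed dimensional formulations.
import numpy as np

n = 101                          # number of grid points
h = 1.0 / (n - 1)
f = np.ones(n)                   # source term f = 1
g = 0.3                          # prescribed value at x = 0.5

# Standard finite difference Laplacian with Dirichlet rows at the ends.
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0 / h**2, 2.0 / h**2, -1.0 / h**2
    b[i] = f[i]
A[0, 0] = A[-1, -1] = 1.0        # u(0) = u(1) = 0

# Constraint operator B: a single row sampling u at the midpoint.
mid = n // 2
B = np.zeros((1, n))
B[0, mid] = 1.0

# Assemble and solve the coupled block system [[A, B^T], [B, 0]] [u; lam] = [b; g].
K = np.block([[A, B.T], [B, np.zeros((1, 1))]])
rhs = np.concatenate([b, [g]])
sol = np.linalg.solve(K, rhs)
u, lam = sol[:n], sol[n]

print(f"u(0.5) = {u[mid]:.3f} (target {g}), multiplier = {lam:.3f}")
```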


Author(s):  
Yu.V. DMYTRENKO ◽  
Yu.V. HENZERSKYI ◽  
I.A. YAKOVENKO ◽  
Ye.A. BAKULIN

Problem statement. The implementation of the method for calculating the strength of normal cross-sections of reinforced concrete structures under plane bending, as established in the current building codes of Ukraine, is considered. The main attention is paid to atypical and rarely considered calculation cases that are characteristic of automated algorithms in the LIRA SAPR software package. The purpose of the article. To analyse the feasibility of applying the calculation method of the current building codes and to develop recommendations that account for the specifics of computerized calculations. Methodology. Within the framework of the study, rectangular cross-sections of reinforced concrete structures with single and double reinforcement (the latter providing a significant increase in the reinforcement area of the compressed zone of the cross-section) were considered, with variation of the concrete class, the reinforcement ratio, and the ratio of the reinforcement areas. The stress-strain diagrams of concrete and reinforcement are bilinear, with characteristic values set for the first group of limit states. The character of the change of the cross-section state diagrams "M - εc(1)" is investigated. Research results. It is found that for singly reinforced sections, as the reinforcement area decreases, the strain of the compressed concrete fibre used to solve the system of nonlinear equilibrium equations of the deformation method also decreases. This increases the execution time of reinforcement calculations for plate elements by the Wood method. It is established that for sections with double reinforcement and relatively large ratios of the reinforcement areas, section equilibrium is reached at the maximum strain of the compressed concrete fibre. Conclusions. An approach is proposed for accelerating the calculation of singly reinforced sections, based on the relationship between the reinforcement percentage (area) and the strain of the most compressed fibre of the reinforced concrete element. Implementing this technique in the LIRA SAPR software package takes into account the features of the analytical algorithms for section design and allows optimization and acceleration of the automated algorithms for calculating reinforced concrete structures.
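As an illustration of the deformation-method calculation the abstract refers to, the sketch below builds the "M - εc(1)" state diagram of a singly reinforced rectangular section with bilinear material laws. The geometry, material values, and fibre discretization are assumptions chosen for the example, not data from the article.

```python
# Hedged illustrative sketch (section geometry and design values are assumed,
# not taken from the article): the "M - eps_c(1)" state diagram of a singly
# reinforced rectangular section with bilinear stress-strain laws for concrete
# and reinforcement, assuming plane sections remain plane and pure bending.
import numpy as np
from scipy.optimize import brentq

b, d = 0.30, 0.55                               # width and effective depth, m
A_s = 8e-4                                      # tension reinforcement area, m^2
f_cd, eps_c3, eps_cu3 = 17e6, 1.75e-3, 3.5e-3   # concrete bilinear law (Pa, -)
f_yd, E_s = 435e6, 200e9                        # reinforcement bilinear law (Pa)

def sigma_c(eps):                # bilinear concrete law (compression positive)
    return np.clip(eps / eps_c3, 0.0, 1.0) * f_cd

def sigma_s(eps):                # bilinear reinforcement law
    return np.sign(eps) * min(E_s * abs(eps), f_yd)

def internal_forces(eps_c1, x, n_fib=200):
    """Axial force and moment for top-fibre strain eps_c1 and neutral-axis depth x."""
    z = (np.arange(n_fib) + 0.5) * x / n_fib    # fibre centroids from the top
    eps = eps_c1 * (1.0 - z / x)                # linear strain profile
    dA = b * x / n_fib
    Fc = np.sum(sigma_c(eps)) * dA              # concrete compression resultant
    zc = np.sum(sigma_c(eps) * z) * dA / Fc     # its depth from the top fibre
    eps_s = eps_c1 * (d - x) / x                # steel strain (tension)
    Fs = sigma_s(eps_s) * A_s
    return Fc - Fs, Fc * (d - zc)               # N (should be 0), M about the steel

# Sweep the compressed-fibre strain and solve the equilibrium equation N(x) = 0.
for eps_c1 in np.linspace(2e-4, eps_cu3, 8):
    x = brentq(lambda x: internal_forces(eps_c1, x)[0], 1e-4, d)
    M = internal_forces(eps_c1, x)[1]
    print(f"eps_c1 = {eps_c1:.4f}  x = {x:.3f} m  M = {M / 1e3:.1f} kNm")
```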


2021 ◽  
Vol 13 (23) ◽  
pp. 4903
Author(s):  
Tomasz Niedzielski ◽  
Mirosława Jurecka ◽  
Bartłomiej Miziński ◽  
Wojciech Pawul ◽  
Tomasz Motyl

Recent advances in search and rescue methods include the use of unmanned aerial vehicles (UAVs) to carry out aerial monitoring of terrain to spot lost individuals. To date, such searches have been conducted by human observers who view UAV-acquired videos or images. Alternatively, lost persons may be detected by automated algorithms. Although some algorithms are implemented in software to support search and rescue activities, no successful rescue using automated human detectors has been reported in the scientific literature thus far. This paper presents a report from a search and rescue mission carried out by the Bieszczady Mountain Rescue Service near the village of Cergowa in SE Poland, where a 65-year-old man was rescued after being detected with SARUAV software. This software uses convolutional neural networks to automatically locate people in close-range nadir aerial images. The missing man, who suffered from Alzheimer's disease (as well as a stroke the previous day), spent more than 24 h in open terrain. SARUAV software was allocated to support the search, and its task was to process 782 nadir and near-nadir JPG images collected during four photogrammetric flights. After 4 h 31 min of analysis, the system successfully detected the missing person and provided his coordinates (121 photos from the flight over the lost person were uploaded; image processing and verification of hits lasted 5 min 48 s). The presented case study proves that the use of a UAV assisted by SARUAV software may speed up the search mission.
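SARUAV's internals are not described beyond the use of convolutional neural networks on nadir images, so the following is a purely hypothetical sketch of how automated person detection over such imagery can be organised: tiling each image, scoring tiles with a detector, and returning candidate locations for human verification. The `person_probability` function and all thresholds are illustrative placeholders.

```python
# Hypothetical sketch only: the tiling scheme, the person_probability model
# call, and the thresholds below are illustrative placeholders, not SARUAV's
# actual pipeline.
import numpy as np

def person_probability(tile: np.ndarray) -> float:
    """Placeholder for a CNN returning P(person present) for an image tile."""
    return float(tile.mean() > 250)   # stand-in logic for the sketch

def detect_in_nadir_image(image, tile=256, stride=128, threshold=0.9):
    """Slide a window over a nadir image and return pixel centres of hits."""
    hits = []
    h, w = image.shape[:2]
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            p = person_probability(image[r:r + tile, c:c + tile])
            if p >= threshold:
                hits.append((r + tile // 2, c + tile // 2, p))
    return hits   # candidate detections passed on for human verification
```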


2021 ◽  
Vol 13 (22) ◽  
pp. 4584
Author(s):  
Luke Weidner ◽  
Gabriel Walton

Rockfall is a frequent hazard in mountainous areas, but risks can be mitigated by the construction of protection structures and slope modification. In this study, two rock slopes along a highway in western Colorado were monitored monthly using Terrestrial Laser Scanning (TLS) before, during, and after mitigation activities were performed to observe the influence of construction and weather variables on rockfall activity. Between September 2020 and February 2021, the slopes were mechanically scaled and reinforced using rock bolts, wire mesh, and polyurethane resin injection. We used a state-of-the-art TLS monitoring workflow to process the acquired point clouds, including semi-automated algorithms for alignment, change detection, clustering, and rockfall-volume calculation. Our initial hypotheses were that the slope-construction activities would have an immediate effect on the rockfall rate post-construction and would exhibit a decreased correlation with weather-related triggering factors, such as precipitation and freeze-thaw cycles. However, our observations did not confirm this, and instead an increase in post-construction rockfall was recorded, with strong correlation to weather-related triggering factors. While this does not suggest that the overall mitigation efforts were ineffective in reducing rockfall hazard and risk of large blocks, we did not find evidence that mitigation efforts influenced the rockfall hazard associated with the release of small- to medium-sized blocks (<1 m³). These results can be used to develop improved and tailored mitigation methods for rock slopes in the future.
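The clustering and rockfall-volume steps of such a workflow can be sketched in general terms. The example below (synthetic points, not the authors' implementation) groups change-detection points into discrete rockfall events with DBSCAN and estimates each event's volume from the convex hull of its points.

```python
# Minimal sketch with synthetic inputs, mirroring the clustering and
# rockfall-volume steps of a TLS change-detection workflow in general terms,
# not the authors' specific implementation.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

# change_points: Nx3 points flagged as significant material loss between two
# TLS epochs by change detection; generated synthetically here.
rng = np.random.default_rng(0)
block_a = rng.normal([0.0, 0.0, 0.0], 0.15, size=(300, 3))
block_b = rng.normal([5.0, 1.0, 2.0], 0.25, size=(500, 3))
change_points = np.vstack([block_a, block_b])

# Group points into discrete rockfall events; eps and min_samples would be
# tuned to the point spacing of the real scans.
labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(change_points)

for label in sorted(set(labels) - {-1}):        # -1 marks unclustered noise
    cluster = change_points[labels == label]
    volume = ConvexHull(cluster).volume         # m^3 if coordinates are metres
    print(f"event {label}: {len(cluster)} points, ~{volume:.3f} m^3")
```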


2021 ◽  
Author(s):  
Mauro Silberberg ◽  
Hernán Edgardo Grecco

Quantitative analysis of high-throughput microscopy images requires robust automated algorithms. Background estimation is usually the first step and has an impact on all subsequent analysis, in particular for foreground detection and calculation of ratiometric quantities. Most methods recover only a single background value, such as the median. Methods that aim to retrieve a background distribution by splitting the intensity histogram yield a biased estimate in non-trivial cases. In this work, we present the first method to recover an unbiased estimate of the background distribution directly from an image and without any additional input. Through a robust statistical test, our method leverages the lack of local spatial correlation in background pixels to select a subset of pixels that accurately represent the background distribution. This method is both fast and simple to implement, as it only uses standard mathematical operations and an averaging filter. Additionally, the only parameter, the size of the averaging filter, does not require fine-tuning. The obtained background distribution can be used to test for foreground membership of individual pixels, or to estimate confidence intervals in derived quantities. We expect that the concepts described in this work can help to develop a novel family of robust segmentation methods.
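The core idea can be sketched with standard operations, although the simplified residual test below is not the authors' exact statistical test: background pixels should deviate from a local average only by noise, so pixels whose residuals pass a robust threshold are pooled to form the background distribution.

```python
# Simplified sketch of the underlying idea, not the authors' exact test:
# background pixels are spatially uncorrelated, so their deviation from a local
# average is consistent with noise, while foreground pixels deviate
# systematically. Pixels passing a robust threshold form the background sample.
import numpy as np
from scipy.ndimage import uniform_filter

def background_distribution(image, size=5, k=3.0):
    """Return intensities of pixels judged to be background."""
    img = image.astype(float)
    local_mean = uniform_filter(img, size=size)   # the only parameter: filter size
    residual = img - local_mean
    # Robust scale estimate of the residuals (median absolute deviation).
    mad = np.median(np.abs(residual - np.median(residual)))
    sigma = 1.4826 * mad
    mask = np.abs(residual) <= k * sigma          # "looks like noise" test
    return image[mask]

# Usage on a synthetic image: summarise the recovered background distribution.
rng = np.random.default_rng(1)
img = rng.poisson(100, size=(256, 256)).astype(float)
img[100:140, 100:140] += 400                      # bright foreground patch
bg = background_distribution(img)
print(np.median(bg), np.percentile(bg, [2.5, 97.5]))
```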


Author(s):  
M Istasy ◽  
AG Schjetnan ◽  
O Talakoub ◽  
T Valiante

Background: Intracranial electroencephalography (iEEG) recordings are obtained from the sampling of sub-cortical structures and provide extraordinary insight into the spatiotemporal dynamics of the brain. As these recordings are increasingly obtained at higher channel counts and greater sampling frequencies, preprocessing through visual inspection is becoming untenable. Consequently, artificial neural networks (ANNs) are now being leveraged for this task. Methods: One-hour recordings from six patients diagnosed with drug-resistant epilepsy at Toronto Western Hospital were obtained alongside fiducial ECG and EOG activity. R-wave peaks and local maxima were identified in the ECG and EOG recordings, respectively, and were time-mapped onto the iEEG recordings to delimit one-second epochs around 1.6 million cardiac and 600 thousand ocular artifacts. Epochs were then split into train-test-evaluation sets and fed into an ANN as one-second spectrograms (0-1,000 Hz) over 30 time steps. Results: The ANN model achieved strong classification results on the evaluation set, with F1 score, positive predictive value, and sensitivity all equal to 0.93. Furthermore, the model architecture computes the classification probability at each time step, enabling insight into the spatiotemporal features driving classification. Conclusions: We expect this research to promote the public sharing of new ANNs from multiple institutions and enable novel automated algorithms for artifact detection in iEEG recordings.
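The abstract does not specify the network architecture, so the sketch below is an assumption-laden illustration of the general setup it describes: a recurrent classifier over 30 time steps of spectral features that emits an artifact probability at every step, so the per-step probabilities can be inspected to see which part of the epoch drives the decision. Layer sizes, the LSTM choice, and the 100 frequency bins are placeholders.

```python
# Hedged sketch (the LSTM, layer sizes, and frequency binning are assumptions;
# the abstract does not specify the architecture): per-time-step artifact
# probabilities over one-second spectrogram epochs with 30 time steps.
import torch
import torch.nn as nn

class EpochArtifactClassifier(nn.Module):
    def __init__(self, n_freq_bins=100, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_freq_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, spec):                  # spec: (batch, 30, n_freq_bins)
        out, _ = self.rnn(spec)               # hidden state at each time step
        p_step = torch.sigmoid(self.head(out)).squeeze(-1)   # (batch, 30)
        return p_step, p_step.mean(dim=1)     # per-step and epoch-level probability

model = EpochArtifactClassifier()
dummy_epoch = torch.rand(8, 30, 100)          # 8 epochs, 30 steps, 100 freq bins
per_step, per_epoch = model(dummy_epoch)
loss = nn.functional.binary_cross_entropy(per_epoch, torch.ones(8))
print(per_step.shape, per_epoch.shape, loss.item())
```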


2021 ◽  
Vol 11 (20) ◽  
pp. 9734
Author(s):  
Kristen M. Meiburger ◽  
Massimo Salvi ◽  
Giulia Rotunno ◽  
Wolfgang Drexler ◽  
Mengyang Liu

Optical coherence tomography angiography (OCTA) is a promising technology for the non-invasive imaging of vasculature. Many studies in the literature present automated algorithms to quantify OCTA images, but a review comparing the most common methods across multiple clinical applications (e.g., ophthalmology and dermatology) is lacking. Here, we aim to provide readers with a useful review and handbook for automatic segmentation and classification methods using OCTA images, presenting a comparison of techniques found in the literature based on the adopted segmentation or classification method and on the clinical application. Another goal of this study is to provide insight into the direction of research in automated OCTA image analysis, especially in the current era of deep learning.


Author(s):  
Farah Alsafar ◽  
Zong-Ming Li

Abstract Background The purpose of the study was to examine the coverage of thenar and hypothenar muscles on the transverse carpal ligament (TCL) in the radioulnar direction through in vivo ultrasound imaging of the carpal tunnel. We hypothesized that the TCL distance covered by the thenar muscle would be greater than that by the hypothenar muscle, and that total muscle coverage on the TCL would be greater than the TCL-alone region. Methods Ultrasound videos of the human wrist were collected from 20 healthy subjects. Automated algorithms were used to extract the distal cross-sectional image at the trapezium-hamate level. Manual tracing of the anatomical features was conducted. Results Thenar muscles covered a significantly larger distance (11.9 ± 1.8 mm) than hypothenar muscles (1.7 ± 0.8 mm) (p < 0.001). The TCL length covered by the thenar and hypothenar muscles was greater than the TCL-alone length (p < 0.001). The thenar and hypothenar muscle coverage on the TCL, as normalized to the total TCL length, was 61.0 ± 7.5%. Conclusions More than 50% of the TCL at the distal carpal tunnel is covered by thenar and hypothenar muscles. Knowledge of muscular attachments to the TCL improves our understanding of carpal tunnel syndrome etiology and can guide carpal tunnel release surgery.
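The reported comparison is a paired, within-subject one. The toy sketch below shows how such a comparison and the normalised coverage can be computed; the per-subject values and the assumed total TCL length are synthetic, chosen only to match the summary statistics in spirit, not the study data.

```python
# Toy sketch of the reported comparison (synthetic values, not the study data):
# paired comparison of thenar vs. hypothenar coverage on the TCL and coverage
# normalised to an assumed total TCL length.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
thenar = rng.normal(11.9, 1.8, size=20)       # mm, per subject
hypothenar = rng.normal(1.7, 0.8, size=20)    # mm, per subject
tcl_total = rng.normal(22.0, 2.0, size=20)    # mm, assumed total TCL length

t, p = ttest_rel(thenar, hypothenar)          # paired test across subjects
coverage = 100 * (thenar + hypothenar) / tcl_total
print(f"p = {p:.2e}, mean coverage = {coverage.mean():.1f}%")
```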


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Pratheeban Nambyiah ◽  
Andre E. X. Brown

Anaesthesia exposure to the developing nervous system causes neuroapoptosis and behavioural impairment in vertebrate models. Mechanistic understanding is limited, and target-based approaches are challenging. High-throughput methods may be an important parallel approach to drug-discovery and mechanistic research. The nematode worm Caenorhabditis elegans is an ideal candidate model. A rich subset of its behaviour can be studied, and hundreds of behavioural features can be quantified, then aggregated to yield a ‘signature’. Perturbation of this behavioural signature may provide a tool that can be used to quantify the effects of anaesthetic regimes, and act as an outcome marker for drug screening and molecular target research. Larval C. elegans were exposed to isoflurane, ketamine, morphine, dexmedetomidine, and lithium (and combinations). Behaviour was recorded, and videos analysed with automated algorithms to extract behavioural features. Anaesthetic exposure during early development leads to persistent behavioural variation (125 features in total across exposure combinations). Higher concentrations, and combinations of isoflurane with ketamine, lead to persistent change in a greater number of features. Morphine and dexmedetomidine do not appear to lead to behavioural impairment. Lithium rescues the neurotoxic phenotype produced by isoflurane. Findings correlate well with vertebrate research: impairment is dependent on agent, is concentration-specific, is more likely with combination therapies, and can potentially be rescued by lithium. These results suggest that C. elegans may be an appropriate model with which to pursue phenotypic screens for drugs that mitigate this neurobehavioural impairment. Some possibilities are suggested for how high-throughput platforms might be organised in service of this field.
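Counting features that differ between exposed and control animals amounts to a mass-univariate screen with multiple-testing correction. The sketch below illustrates that kind of screen on synthetic data; the feature count, sample sizes, and test choice are assumptions for the example, not the authors' analysis.

```python
# Illustrative sketch only (synthetic data): mass univariate comparison of
# behavioural features between exposed and control worms with multiple-testing
# correction, yielding a count of significantly altered features.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_features = 300
control = rng.normal(0.0, 1.0, size=(50, n_features))      # 50 control worms
exposed = rng.normal(0.2, 1.0, size=(50, n_features))      # 50 exposed worms

pvals = np.array([mannwhitneyu(control[:, i], exposed[:, i]).pvalue
                  for i in range(n_features)])
rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{rejected.sum()} of {n_features} features differ after FDR correction")
```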


2021 ◽  
Vol 22 (18) ◽  
pp. 9971
Author(s):  
Matteo Ferro ◽  
Ottavio de Cobelli ◽  
Mihai Dorin Vartolomei ◽  
Giuseppe Lucarelli ◽  
Felice Crocetto ◽  
...  

Radiomics and genomics represent two of the most promising fields of cancer research, designed to improve the risk stratification and disease management of patients with prostate cancer (PCa). Radiomics involves the conversion of imaging data into quantitative features using manual or automated algorithms, enhancing existing data through mathematical analysis. This could increase the clinical value in PCa management. Features extracted from imaging methods such as magnetic resonance imaging (MRI) can be analysed with machine learning and artificial intelligence to support the best clinical decisions. Genomics information can be explained or decoded by radiomics. The development of such methodologies can create more-efficient predictive models and can better characterize the molecular features of PCa. Additionally, the identification of new imaging biomarkers can overcome the known heterogeneity of PCa through non-invasive radiological assessment of the whole organ. In the future, validation of recent findings in large, randomized cohorts of PCa patients could establish the role of radiogenomics. Here, we review quantitative and qualitative results from well-designed studies on the diagnosis, treatment, and follow-up of prostate cancer based on radiomics, genomics, and radiogenomics research.
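As a concrete illustration of the radiomics step in general terms, the sketch below extracts quantitative features from an MRI volume within a delineated mask. The article does not name a toolkit, so the use of pyradiomics and the file names are assumptions.

```python
# Hedged sketch of radiomic feature extraction; the use of pyradiomics and the
# file paths below are assumptions, not taken from the article.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()            # default settings
features = extractor.execute("t2w_mri.nrrd", "lesion_mask.nrrd")    # hypothetical paths

# Keep only the numeric radiomic features (entries prefixed "original_"),
# discarding the diagnostic metadata also returned by the extractor.
radiomic_features = {k: v for k, v in features.items() if k.startswith("original_")}
print(len(radiomic_features), "features extracted")
```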

