manual intervention
Recently Published Documents


TOTAL DOCUMENTS

271
(FIVE YEARS 151)

H-INDEX

13
(FIVE YEARS 6)

2022 ◽  
Vol 14 (2) ◽  
pp. 308
Author(s):  
Zhao Zhan ◽  
Wenzhong Shi ◽  
Min Zhang ◽  
Zhewei Liu ◽  
Linya Peng ◽  
...  

Landslide trails are important elements of landslide inventory maps, providing valuable information for landslide risk and hazard assessment. Compared with traditional manual mapping, skeletonization methods offer a more cost-efficient way to map landslide trails by automatically generating centerlines from landslide polygons. However, existing skeletonization methods require expert knowledge and manual intervention to obtain a branchless skeleton, which limits their applicability. To address this problem, a new workflow for landslide trail extraction (LTE) is proposed in this study. To avoid generating redundant branches and to improve the degree of automation, the two endpoints of the trail, i.e., the crown point and the toe point, are first determined with reference to the digital elevation model. On this basis, a fire extinguishing model (FEM) is proposed to generate skeletons without redundant branches. Finally, the effectiveness of the proposed method is verified by extracting landslide trails from landslide polygons of various shapes and sizes in two study areas. Experimental results show that, compared with the traditional grassfire-model-based skeletonization method, the proposed FEM obtains landslide trails without spurious branches. More importantly, compared with the baseline method in our previous work, the proposed LTE workflow avoids problems including incompleteness, low centrality, and direction errors. The method requires no parameter tuning, yields excellent performance, and is thus highly valuable for practical landslide mapping.
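The key idea, fixing the crown and toe endpoints first so that the extracted centerline is necessarily branchless, can be illustrated with a toy sketch. The grid mask, endpoint coordinates, and breadth-first search below are illustrative assumptions, not the paper's actual FEM:

```python
from collections import deque

# Toy landslide polygon as a binary grid mask (1 = inside the polygon).
mask = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
]

def trail(mask, crown, toe):
    """BFS shortest path from crown to toe through the polygon.
    Fixing both endpoints up front yields a single branchless trail,
    unlike a skeleton, which can sprout spurious side branches."""
    rows, cols = len(mask), len(mask[0])
    prev = {crown: None}
    q = deque([crown])
    while q:
        r, c = q.popleft()
        if (r, c) == toe:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and mask[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    path, node = [], toe
    while node is not None:          # walk back from toe to crown
        path.append(node)
        node = prev[node]
    return path[::-1]

print(trail(mask, crown=(0, 1), toe=(4, 5)))
```

A real implementation would work on raster polygons and pick the crown and toe from the highest and lowest DEM cells, but the branchless-by-construction property is the same.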


2022 ◽  
Author(s):  
Yuquan Li ◽  
Chang-Yu Hsieh ◽  
Ruiqiang Lu ◽  
Xiaoqing Gong ◽  
Xiaorui Wang ◽  
...  

Abstract Improving efficiency is a core and long-standing challenge in drug discovery. For this purpose, many graph learning methods have been developed to search for potential drug candidates quickly and at low cost. In practice, however, the pursuit of high prediction performance on a limited number of datasets has ossified these methods, leaving them at a disadvantage when repurposed to the new data continually generated in drug discovery. Here we propose a flexible method that can adapt to any dataset and make accurate predictions. The proposed method employs an adaptive pipeline that learns from a dataset and outputs a predictor. Without any manual intervention, the method achieves far better prediction performance on all tested datasets than traditional methods, which rely on hand-designed neural architectures and other fixed components. In addition, we found that the proposed method is more robust than traditional methods and can provide meaningful interpretability. Given the above, the proposed method can serve as a reliable tool for predicting molecular interactions and properties with high adaptability, performance, robustness, and interpretability. This work takes a solid step toward helping researchers design better drugs with high efficiency.
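The notion of an adaptive pipeline, one that inspects a dataset and emits a predictor configuration with no manual tuning, can be sketched as follows. The rules, thresholds, and field names below are invented for illustration and are not the paper's actual adaptation strategy:

```python
# Hypothetical sketch: derive a predictor configuration from simple dataset
# statistics, so no hand tuning is needed per dataset. The specific rules
# and thresholds are illustrative assumptions, not the published method.

def adapt_pipeline(n_samples, n_features):
    """Pick model capacity and regularisation from dataset size alone."""
    config = {
        # Small datasets get a smaller model and heavier regularisation.
        "hidden_dim": 64 if n_samples < 1000 else 256,
        "dropout": 0.5 if n_samples < 1000 else 0.1,
        # Train longer on small datasets, capped at 300 epochs.
        "epochs": min(300, max(30, 30000 // max(n_samples, 1))),
        # Wide datasets get a feature-selection step; narrow ones do not.
        "select_features": n_features > 100,
    }
    return config

print(adapt_pipeline(200, 50))
print(adapt_pipeline(5000, 500))
```

The point of the sketch is only the shape of the idea: dataset statistics in, predictor specification out, with the human removed from the loop.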


Author(s):  
Maged Farouk

Artificial intelligence (AI) has already changed the world and has had a marked impact across a range of fields including industry, criminal law, health, national security, transport, nanotechnology, and smart cities, while also raising issues such as algorithmic decision-making and access to data. This paper shows how these technologies are a great asset to humans: they are programmed to reduce human effort as much as possible and can operate in an automated fashion, so manual intervention is rarely required when working with them. The paper also surveys the various worldwide efforts to apply AI techniques to the COVID-19 pandemic.


Author(s):  
Hassan Iqbal ◽  
Ayesha Khalid ◽  
Muhammad Shahzad

Cloud gaming platforms have witnessed tremendous growth over the past two years, with a number of large Internet companies, including Amazon, Facebook, Google, Microsoft, and Nvidia, publicly launching their own platforms. While cloud gaming platforms continue to grow, visibility into their performance and relative comparison is lacking, largely due to the absence of generally applicable, systematic measurement methodologies. As such, in this paper we implement DECAF, a methodology to systematically analyze and dissect the performance of cloud gaming platforms across different game genres and game platforms. DECAF is highly automated and requires minimal manual intervention. By applying DECAF, we measure the performance of three commercial cloud gaming platforms, Google Stadia, Amazon Luna, and Nvidia GeForce Now, and uncover a number of important findings. First, we find that processing delays in the cloud make up the majority of the total round-trip delay experienced by users, accounting for as much as 73.54% of total user-perceived delay. Second, we find that video streams delivered by cloud gaming platforms are characterized by high variability of bitrate, frame rate, and resolution. Platforms struggle to consistently serve 1080p/60 frames per second streams across different game genres even when the available bandwidth is 8-20× the platform's recommended settings. Finally, we show that game platforms exhibit performance cliffs, reacting poorly to packet losses and in some cases dramatically reducing the delivered bitrate by up to 6.6× when loss rates increase from 0.1% to 1%. Our work has important implications for cloud gaming platforms and opens the door for further research on comprehensive measurement methodologies for cloud gaming.
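The headline finding, that server-side processing dominates user-perceived delay, is a simple decomposition. The millisecond figures below are made-up illustrative numbers chosen to land near the reported 73.54% share, not measurements from the paper:

```python
# Back-of-the-envelope decomposition of user-perceived delay in cloud gaming,
# mirroring the kind of breakdown DECAF reports. All millisecond values are
# invented for illustration.

def processing_share(network_ms, game_logic_ms, encode_ms, decode_display_ms):
    """Return the cloud processing share of total delay, as a percentage."""
    cloud = game_logic_ms + encode_ms           # server-side processing
    total = network_ms + cloud + decode_display_ms
    return round(100 * cloud / total, 2)

print(processing_share(network_ms=30, game_logic_ms=60,
                       encode_ms=45, decode_display_ms=8))
```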


2021 ◽  
Vol 923 (1) ◽  
pp. 20
Author(s):  
Xiaoying Pang ◽  
Zeqiu Yu ◽  
Shih-Yun Tang ◽  
Jongsuk Hong ◽  
Zhen Yuan ◽  
...  

Abstract We identify hierarchical structures in the Vela OB2 complex and the cluster pair Collinder 135 and UBC 7 with Gaia EDR3 using the neural network machine-learning algorithm StarGO. Five second-level substructures are disentangled in Vela OB2, which are referred to as Huluwa 1 (Gamma Velorum), Huluwa 2, Huluwa 3, Huluwa 4, and Huluwa 5. For the first time, Collinder 135 and UBC 7 are simultaneously identified as constituent clusters of the pair with minimal manual intervention. We propose an alternative scenario in which Huluwa 1–5 originated from sequential star formation. The older clusters Huluwa 1–3, with an age of 10–22 Myr, generated stellar feedback that caused turbulence fostering the formation of the younger-generation Huluwa 4–5 (7–20 Myr). A supernova explosion located inside the Vela IRAS shell quenched star formation in Huluwa 4–5 and rapidly expelled the remaining gas from the clusters. This resulted in global mass stratification across the shell, which is confirmed by the regression discontinuity method. The stellar mass in the lower rim of the shell is 0.32 ± 0.14 M⊙ higher than in the upper rim. Local, cluster-scale mass segregation is observed in the lowest-mass cluster Huluwa 5. Huluwa 1–5 (in Vela OB2) are experiencing significant expansion, while the cluster pair is expanding moderately. The velocity dispersions suggest that all five groups (including Huluwa 1A and Huluwa 1B) in Vela OB2 and the cluster pair are supervirial and are undergoing disruption, and also that Huluwa 1A and Huluwa 1B may be a coeval young cluster pair. N-body simulations predict that Huluwa 1–5 in Vela OB2 and the cluster pair will continue to expand over the next 100 Myr and eventually dissolve.
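The supervirial test mentioned above amounts to comparing an observed velocity dispersion with the dispersion expected for virial equilibrium, roughly sigma_vir = sqrt(G*M / (eta*r_hm)) with eta ~ 10. The velocities, mass, and radius below are invented illustrative values, not measurements of Huluwa 1–5:

```python
import math

# Minimal sketch of a supervirial check. G is in pc * (km/s)^2 / Msun.
# All input values below are illustrative assumptions, not data from
# the Gaia EDR3 analysis.
G = 4.30e-3

def dispersion(vels):
    """1D velocity dispersion (km/s) of a list of radial velocities."""
    mean = sum(vels) / len(vels)
    return math.sqrt(sum((v - mean) ** 2 for v in vels) / len(vels))

def is_supervirial(vels, mass_msun, r_hm_pc, eta=10.0):
    """True if the observed dispersion exceeds the virial expectation."""
    sigma_vir = math.sqrt(G * mass_msun / (eta * r_hm_pc))
    return dispersion(vels) > sigma_vir

# A 500 Msun group with half-mass radius 2 pc and ~0.7 km/s dispersion
# exceeds its virial dispersion (~0.33 km/s), so it would be disrupting.
print(is_supervirial([1.0, 2.0, 3.0, 2.0], mass_msun=500, r_hm_pc=2.0))
```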


2021 ◽  
Vol 10 (6) ◽  
pp. 3052-3063
Author(s):  
Jumana A. Hassan ◽  
Basil H. Jasim

Many modern monitoring and control systems, such as those in factories and homes, use the internet of things (IoT). These devices perform their functions without requiring manual intervention, improving convenience and safety. Electrical networks are one of the most important areas in which IoT systems can control, monitor, detect faults, and raise alarms, because detecting faults, monitoring network data, and finding the best solutions in less time improve the efficiency and reliability of electrical networks. This paper proposes a system based on a wireless sensor network (WSN). The system monitors and controls a variety of electrical and environmental variables, including power consumption, ambient temperature, humidity, flame, lighting, and cable-cut detection on electrical poles. Each sensor is a node and is connected separately to a microcontroller board. The data collected by these sensors are displayed and monitored on a web page and saved in a local server's database; the site was created with a variety of web programming languages and deployed using a free global domain. The website includes a database for storing real-time sensor information.
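The described flow, where each node packages its readings and a server stores and displays them, can be sketched in a few lines. The node names, field names, and in-memory "database" are illustrative assumptions, not the paper's schema:

```python
import json
import time

# Minimal sketch of the WSN reporting flow: a sensor node bundles its
# readings into a record, the "server" stores it keyed by node id, and
# the JSON string is what a web page would receive and display.
database = {}  # stands in for the local server's database table

def report(node_id, readings):
    record = {"node": node_id, "ts": time.time(), **readings}
    database.setdefault(node_id, []).append(record)
    return json.dumps(record)

report("pole-7", {"temperature_c": 41.5, "humidity_pct": 18, "flame": False})
report("pole-7", {"temperature_c": 42.0, "humidity_pct": 17, "flame": True})
print(len(database["pole-7"]))  # number of stored readings for this node
```

In the real system each node would be a separate microcontroller posting over the network, but the record-per-reading structure is the same.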


2021 ◽  
Author(s):  
Shraddha Birari ◽  
Sukhada Bhingarkar

Source code summarization is the task of generating a description from source code. The summary gives a brief idea of the functionality performed by the code. Summaries are essential for software maintenance, and are also beneficial for code categorization and retrieval. Generating summaries automatically rather than through manual intervention saves time and effort. Artificial Intelligence is a very popular branch of computer science that demonstrates machine intelligence and covers a wide range of applications. This paper focuses on the use of Artificial Intelligence for source code summarization. Natural Language Processing (NLP) and Machine Learning (ML) are considered subsets of Artificial Intelligence. Thus, this paper presents a critical review of the various NLP and ML techniques implemented so far for generating summaries from source code and points out research challenges in this field.
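As a point of contrast with the learned approaches such a review covers, a trivial rule-based summarizer can be written in a few lines. The heuristic of splitting a function name into words is an illustrative baseline, not a technique from the paper:

```python
import ast

# A deliberately simple, non-ML baseline for source code summarization:
# derive a one-line description from a function's name and signature.
# NLP/ML approaches learn this code-to-text mapping from data instead.

def summarize(source):
    fn = ast.parse(source).body[0]               # first top-level function
    words = fn.name.split("_")                   # snake_case name -> words
    args = ", ".join(a.arg for a in fn.args.args)
    return f"{' '.join(words)} ({args})"

print(summarize("def compute_total_price(items, tax_rate):\n    pass"))
```

The obvious failure modes of this baseline (opaque names, no grasp of the body's logic) are exactly what motivates the NLP and ML techniques reviewed in the paper.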


Author(s):  
Shibaprasad Sen ◽  
Ankan Bhattacharyya ◽  
Ram Sarkar ◽  
Kaushik Roy

The work reported in this article deals with a ground truth generation scheme for online handwritten Bangla documents at the text-line, word, and stroke levels. The aim of the proposed scheme is twofold: firstly, to build a document-level database that future researchers can use for research in this field; secondly, to provide ground truth information that helps other researchers evaluate the performance of their algorithms for text-line extraction, word extraction, word segmentation, stroke recognition, and word recognition. The reported ground truth generation scheme starts with text-line extraction from the online handwritten Bangla documents, then word extraction from the text-lines, and finally segmentation of those words into basic strokes. After word segmentation, the basic strokes are assigned appropriate class labels using a modified distance-based feature extraction procedure and an MLP (Multi-layer Perceptron) classifier. The Unicode for each word is then generated from the sequence of stroke labels. XML files are used to store the stroke-, word-, and text-line-level ground truth information for the corresponding documents. The proposed system is semi-automatic, and each step, such as text-line extraction, word extraction, word segmentation, and stroke recognition, has been implemented using different algorithms. Thus, the proposed ground truth generation procedure greatly reduces manual intervention by cutting the number of mouse clicks required to extract text-lines and words from the document and to segment the words into basic strokes. The integrated stroke recognition module also helps to minimize the manual labor needed to assign appropriate stroke labels. The database is freely available and can be accessed at https://byanjon.herokuapp.com/ .
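The hierarchical text-line / word / stroke ground truth stored in XML might look like the sketch below. The tag names, attributes, and stroke labels are assumptions for illustration; the actual schema of the published database may differ:

```python
import xml.etree.ElementTree as ET

# Sketch of hierarchical ground truth: a document contains text-lines,
# a text-line contains words, and a word contains labeled basic strokes.
# Tag and attribute names are illustrative, not the published schema.

def build_ground_truth(doc_id, lines):
    """lines is a list of text-lines; each text-line is a list of words;
    each word is a list of stroke labels."""
    root = ET.Element("document", id=doc_id)
    for li, words in enumerate(lines):
        line_el = ET.SubElement(root, "textline", id=str(li))
        for wi, strokes in enumerate(words):
            word_el = ET.SubElement(line_el, "word", id=str(wi))
            for label in strokes:
                ET.SubElement(word_el, "stroke", label=label)
    return root

root = build_ground_truth("doc-001",
                          [[["ka", "aa-kar"]],              # line 0: one word
                           [["ba"], ["la", "aa-kar"]]])     # line 1: two words
print(len(root.findall(".//stroke")))
```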


2021 ◽  
Vol 13 (23) ◽  
pp. 4823
Author(s):  
Cheng Shi ◽  
Yenan Dang ◽  
Li Fang ◽  
Zhiyong Lv ◽  
Huifang Shen

Multi-sensor images can provide supplementary information, usually leading to better performance in classification tasks. However, typical deep neural network-based multi-sensor classification methods learn each sensor image separately, followed by stacked concatenation for feature fusion. This approach incurs a large time cost for network training and may cause insufficient feature fusion. Considering efficient multi-sensor feature extraction and fusion with a lightweight network, this paper proposes an attention-guided classification network (AGCNet), especially for multispectral (MS) and panchromatic (PAN) image classification. In the proposed method, a share-split network (SSNet), comprising a shared branch and multiple split branches, performs feature extraction for each sensor image: the shared branch learns the basis features of MS and PAN images with fewer learnable parameters, while the split branches extract the privileged features of each sensor image via multiple task-specific attention units. Furthermore, a selective classification network (SCNet) with a selective kernel unit is used for adaptive feature fusion. The proposed AGCNet can be trained in an end-to-end fashion without manual intervention. The experimental results are reported on four MS and PAN datasets and compared with state-of-the-art methods. The classification maps and accuracies show the superiority of the proposed AGCNet model.
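The selective-fusion idea, using attention weights to decide per channel how much of each sensor's feature enters the fused representation, can be illustrated numerically. The softmax gating below is a generic sketch with assumed logits, not AGCNet's actual SCNet:

```python
import math

# Tiny numeric illustration of attention-guided fusion of MS and PAN
# features: per-channel softmax weights gate each sensor's contribution.
# Feature values and logits are invented for illustration.

def softmax(pair):
    e = [math.exp(x) for x in pair]
    s = sum(e)
    return [x / s for x in e]

def fuse(ms_feat, pan_feat, ms_logit, pan_logit):
    fused = []
    for m, p, lm, lp in zip(ms_feat, pan_feat, ms_logit, pan_logit):
        wm, wp = softmax((lm, lp))     # weights sum to 1 per channel
        fused.append(wm * m + wp * p)
    return fused

# Channel 0 trusts the MS feature, channel 1 trusts the PAN feature.
fused = fuse([1.0, 0.0], [0.0, 1.0], ms_logit=[2.0, -2.0], pan_logit=[-2.0, 2.0])
print([round(x, 3) for x in fused])
```

In the actual network the logits would themselves be produced by learned layers from the pooled features, which is what makes the fusion adaptive.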


2021 ◽  
Vol 12 ◽  
Author(s):  
Tiago Moreira ◽  
Alexander Furnica ◽  
Elke Daemen ◽  
Michael V. Mazya ◽  
Christina Sjöstrand ◽  
...  

Introduction: Starting reperfusion therapies as early as possible in acute ischemic stroke is of utmost importance to improve outcomes. Comprehensive Stroke Centers (CSCs) can use surveys, personnel shadowing, or journal analysis to improve logistics, but these approaches are labor intensive, can lack accuracy, and disturb the staff by requiring manual intervention. The aim of this study was to measure transport times, facility usage, and patient–staff colocalization with an automated real-time location system (RTLS).
Patients and Methods: We tested IR detection of patient wristbands and staff badges in parallel with a period when the triage of stroke patients was changed from admission to the emergency room (ER) to direct admission to neuroradiology.
Results: In total, 281 patients were enrolled. In 242/281 (86%) of cases, stroke patient logistics could be detected. Consistent patient–staff colocalizations were detected in 177/281 (63%) of cases. Bypassing the ER led to a significant decrease in the median time neurologists spent with patients (from 15 to 9 min), but to an increase in the time nurses spent with patients (from 13 to 22 min; p = 0.036). Ischemic stroke patients used the most staff time (median 25 min) compared to hemorrhagic stroke patients (median 13 min) and stroke mimics (median 15 min).
Discussion: Time spent with patients increased for nurses but decreased for neurologists after direct triage to the CSC. While lower in-hospital transport times were detected, time spent in neuroradiology (CT room and waiting) remained unchanged.
Conclusion: The RTLS could be used to measure the timestamps in stroke pathways and assist in staff allocation.
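Deriving patient–staff colocalization time from RTLS detections amounts to tracking each badge's latest room and summing the intervals during which two badges share one. The event format, badge names, and timings below are invented for illustration, not the study's data:

```python
# Sketch of patient-staff colocalization from RTLS events. Each event is
# (timestamp_min, badge_id, room): a badge is assumed to stay in a room
# until its next detection. All event data are illustrative assumptions.

def colocated_minutes(events, patient, staff, end_time):
    where = {}                       # latest known room per badge
    colocated_since, total = None, 0.0
    for t, badge, room in sorted(events):
        where[badge] = room
        together = patient in where and where[patient] == where.get(staff)
        if together and colocated_since is None:
            colocated_since = t       # colocalization starts at this event
        elif not together and colocated_since is not None:
            total += t - colocated_since
            colocated_since = None    # colocalization ended at this event
    if colocated_since is not None:   # still together at end of observation
        total += end_time - colocated_since
    return total

events = [
    (0, "pat-1", "ER"),
    (0, "nurse-1", "triage"),
    (5, "nurse-1", "ER"),    # nurse joins the patient
    (18, "pat-1", "CT"),     # patient moved to CT alone
    (25, "nurse-1", "CT"),   # nurse catches up in the CT room
]
print(colocated_minutes(events, "pat-1", "nurse-1", end_time=30))
```

Real RTLS data would also need the gap handling behind the 63% consistent-detection figure, i.e. deciding what to do when a badge is not seen for a while.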

