automate data
Recently Published Documents

TOTAL DOCUMENTS: 23 (FIVE YEARS: 11)
H-INDEX: 3 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Sylvain Prigent ◽  
Cesar Augusto Valades-Cruz ◽  
Ludovic Leconte ◽  
Léo Maury ◽  
Jean Salamero ◽  
...  

Open science and FAIR principles have become major topics in the field of bioimaging. This is due both to new data acquisition technologies that generate large datasets and to new analysis approaches that automate data mining with high accuracy. Nevertheless, data are rarely shared and rigorously annotated, because doing so requires many tedious manual management tasks and software packaging. We present BioImageIT, an open-source framework that integrates FAIR-compliant data management with data processing.
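As one illustration of the kind of FAIR-style bookkeeping such a framework automates, the minimal Python sketch below writes a JSON provenance sidecar next to a processed image. The function name, file paths, and metadata fields are hypothetical assumptions for illustration; this is not the BioImageIT API.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def annotate_output(raw_path: str, out_path: str, tool: str, params: dict) -> None:
    """Write a JSON sidecar recording provenance for a processed image.

    Mimics the spirit of FAIR annotation (metadata kept next to the data);
    it is not the BioImageIT API.
    """
    metadata = {
        "source": raw_path,
        "output": out_path,
        "tool": tool,
        "parameters": params,
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(out_path).with_suffix(".meta.json")
    sidecar.write_text(json.dumps(metadata, indent=2))

# Hypothetical usage: record how a denoised image was produced.
annotate_output("raw/cell_01.tif", "processed/cell_01_denoised.tif",
                tool="gaussian_denoise", params={"sigma": 1.5})
```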


2021 ◽  
Author(s):  
Masha Medvedeva ◽  
Thijmen Dam ◽  
Martijn Wieling ◽  
Michel Vols

In this paper we attempt to identify eviction judgements within all case law published by Dutch courts in order to automate data collection, previously conducted manually. To do so we performed two experiments. The first focused on identifying judgements related to eviction, while the second focused on identifying the outcome of the cases in the judgements (eviction vs. dismissal of the landlord’s claim). In the process of conducting the experiments for this study, we have created a manually annotated dataset with eviction-related judgements and their outcomes.
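As a rough illustration of the first task (flagging eviction-related judgements), the sketch below trains a TF-IDF plus logistic-regression pipeline with scikit-learn on a few toy texts. The example texts and labels are invented for illustration; the paper's actual models and features are not specified here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-ins for annotated judgements (1 = eviction-related, 0 = not).
texts = [
    "landlord requests eviction for rent arrears",
    "tenant evicted after repeated nuisance complaints",
    "dispute over contract termination fee",
    "claim for damages after traffic accident",
]
labels = [1, 1, 0, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

# Predict whether a new judgement is eviction-related.
print(clf.predict(["court orders the tenant to vacate the property"]))
```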


2021 ◽  
Author(s):  
Alec Fraser ◽  
Nikolai S Prokhorov ◽  
John M Miller ◽  
Ekaterina S Knyazhanskaya ◽  
Petr G Leiman

Cryo-EM has made extraordinary headway towards becoming a semi-automated, high-throughput structure determination technique. In the general workflow, high-to-medium population states are grouped into two- and three-dimensional classes, from which structures can be obtained with near-atomic resolution and subsequently analyzed to interpret function. However, low population states, which are also functionally important, are often discarded. Here, we describe a technique whereby low population states can be efficiently identified with minimal human effort via a deep convolutional neural network classifier. We use this deep learning classifier to describe a transient, low population state of bacteriophage A511 in the midst of infecting its bacterial host. This method can be used to further automate data collection and identify other functionally important low population states.
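For readers unfamiliar with such classifiers, the PyTorch sketch below shows a small binary CNN scoring 2D class-average images as "rare state" versus "other". The layer sizes and the 1x64x64 input are assumptions for illustration, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class ClassAverageClassifier(nn.Module):
    """Small CNN over grayscale 2D class averages (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # two classes: rare state / other
        )

    def forward(self, x):
        return self.head(self.features(x))

model = ClassAverageClassifier()
dummy = torch.randn(4, 1, 64, 64)   # batch of 4 class-average images
print(model(dummy).shape)            # torch.Size([4, 2])
```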


2021 ◽  
Author(s):  
Michael Schwartz ◽  

Many companies have tried to automate data collection for handheld Digital Multimeters (DMMs) using Optical Character Recognition (OCR). Only recently have companies tried to perform this task using Artificial Intelligence (AI) technology, Cal Lab Solutions being one of them in 2020. But when we developed our first prototype application, we discovered how difficult it is to get a good value on every measurement and test point. A year later, with lessons learned and better software, this paper continues that AI project. In Beta 1 we learned how difficult it is for AI to read segmented displays. There are no pre-trained models for this type of display, so we needed to train one ourselves. This required testing thousands of images, so we changed the scope of the project to a continual-learning AI project. This paper covers how we built our continual-learning AI model and shows how any lab with a webcam can start automating handheld DMMs with software that gets smarter over time.
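A minimal sketch of the capture side of such a setup is shown below, using OpenCV to grab and threshold a webcam frame of a DMM display. The function and the placeholder digit-classification step mentioned in the comments are hypothetical; none of this is Cal Lab Solutions' actual software.

```python
import cv2

def capture_display(camera_index: int = 0):
    """Grab one frame from a webcam pointed at a DMM and isolate the display.

    The thresholded image would then be passed to whatever segmented-display
    classifier a lab trains (a placeholder step, not shown here).
    """
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from webcam")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding separates the bright LCD/LED segments from background.
    _, display = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return display

# Each captured frame can be stored with its true reading so the model keeps
# learning from new examples over time (the continual-learning loop).
```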


2021 ◽  
Vol 110 ◽  
pp. 05009
Author(s):  
Rodion Filippov ◽  
Yuriy Leonov ◽  
Aleksandr Kuzmenko ◽  
Timofey Shestakov

The subject of the study is the analysis of social networks and the construction of an information and analytical system to automate data monitoring and mining. Modern social network analysis systems are reviewed and their distinguishing features are outlined. Various social network analysis methods, and the tasks they can solve, are described. The effectiveness of methods for determining text sentiment is compared.
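As a small illustration of how such a comparison can be scored, the sketch below computes accuracy for two hypothetical sentiment methods against gold labels. The methods and data are invented for illustration; the specific methods compared in the paper are not listed here.

```python
from sklearn.metrics import accuracy_score

# Hypothetical held-out posts with gold sentiment labels (1 = positive, 0 = negative).
gold = [1, 0, 1, 1, 0, 0]
pred_lexicon = [1, 0, 0, 1, 0, 1]   # output of a lexicon-based method
pred_model = [1, 0, 1, 1, 0, 0]     # output of a trained classifier

for name, pred in [("lexicon", pred_lexicon), ("classifier", pred_model)]:
    print(name, accuracy_score(gold, pred))
```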


2020 ◽  
pp. short8-1-short8-9
Author(s):  
Mikhail Ulizko ◽  
Evgeniy Antonov ◽  
Alexey Artamonov ◽  
Rufina Tukumbetova

The paper considers the task of analyzing complex interconnected objects by constructing graphs. There is no single unified tool for constructing graphs: some solutions limit the number of nodes, while others do not display the data visually. The Gephi application, which offers extensive functionality for building and analyzing graphs, was used to construct the graphs for this research. The subject of the research is a politician with a certain set of characteristics. The paper develops an algorithm that automates data collection on politicians. One of the main methods of collecting data from the Internet is web scraping: web scraping software may access the World Wide Web directly over HTTP or through a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented with a software agent. The collected data was needed to construct and analyze the graphs. Using graphs makes it possible to see various types of relationships, including indirect (mediated) ones. This methodology changes the approach to analyzing multidimensional objects.
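A minimal sketch of such a collection-and-export pipeline is shown below: it scrapes a hypothetical page with requests and BeautifulSoup, builds a graph with networkx, and writes a GEXF file that Gephi can open. The URL and CSS selectors are placeholders, not the sources or algorithm used in the paper.

```python
import networkx as nx
import requests
from bs4 import BeautifulSoup

# Hypothetical page listing a politician's affiliations; URL and selectors
# are placeholders for illustration only.
URL = "https://example.org/politicians/jane-doe"

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

graph = nx.Graph()
graph.add_node("Jane Doe", kind="politician")
for link in soup.select("ul.affiliations a"):
    org = link.get_text(strip=True)
    graph.add_node(org, kind="organisation")
    graph.add_edge("Jane Doe", org)

# GEXF is one of the formats Gephi opens directly.
nx.write_gexf(graph, "jane_doe.gexf")
```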


2020 ◽  
Author(s):  
Oliver Stringham ◽  
Adam Toomes ◽  
Aurelie M. Kanishka ◽  
Lewis Mitchell ◽  
Sarah Heinrich ◽  
...  

The unrivalled growth in e-commerce of animals and plants presents an unprecedented opportunity to monitor wildlife trade to inform conservation, biosecurity, and law enforcement. Using the Internet to quantify the scale of the wildlife trade (volume, frequency) is a relatively recent and rapidly developing approach, which currently lacks an accessible framework for locating relevant websites and collecting data. Here, we present an accessible guide for Internet-based wildlife trade surveillance, which uses a repeatable and systematic method to automate data collection from relevant websites. Our guide is adaptable to the multitude of trade-based contexts including different focal taxa or derived parts, and locations of interest. We provide information for working with the diversity of websites that trade wildlife, including social media platforms. Finally, we discuss the advantages and limitations of web data, including the challenges presented by trade occurring on clandestine sections of the Internet (e.g., deep and dark web).
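As a rough sketch of the kind of repeatable collection step such a guide describes, the example below walks a few result pages of a hypothetical classifieds search and saves listings to CSV. The site, selectors, and query are invented; a real survey would adapt them to each target website and respect its terms of use and robots.txt.

```python
import csv
import time
import requests
from bs4 import BeautifulSoup

# Hypothetical classifieds site and selectors, for illustration only.
BASE = "https://example-classifieds.org/search?q=parrot&page={page}"

with open("listings.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["title", "price", "url"])
    for page in range(1, 4):  # first three result pages
        html = requests.get(BASE.format(page=page), timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for ad in soup.select("div.listing"):
            writer.writerow([
                ad.select_one("h2").get_text(strip=True),
                ad.select_one("span.price").get_text(strip=True),
                ad.select_one("a")["href"],
            ])
        time.sleep(1)  # polite crawl delay
```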


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 1540-1540
Author(s):  
David Michael Waterhouse ◽  
Andrew Guinigundo ◽  
Aimee Brown ◽  
Dan Davies ◽  
Lauren Jones ◽  
...  

1540 Background: Pathogenic variants in BRCA1/BRCA2 can affect a breast CA pt's care: preventative interventions, surgical decisions, medical treatments, screening, and family counseling. National data suggest significant non-adherence to NCCN testing guidelines, with only 1/3 of eligible pts referred for genetic services. In 2018, OHC (Cincinnati) launched an APP-centric genetics program in which specially trained APPs carry out genetic counseling and order NCCN-compliant testing. Early data suggested a significant deficit in physician-driven referrals: from 1/01/18 - 07/31/18, 138 new breast pts were estimated to be NCCN guideline-eligible, yet only 28 (20%) pts received genetic services. Methods: In 2019, the OHC genetics team implemented a standardized screening process for every new breast CA pt. An EMR template (iKnowMed G2) that included NCCN guidelines was created for initial breast CA consultation and Oncology Care Model (OCM) treatment planning. All pts, not just OCM pts, are subject to OCM treatment planning. This automated screening method ensured all breast CA pts were screened, drastically increasing compliance. Through integration of genetics screening into the templates, pts meeting NCCN criteria for testing are reflexively referred for genetic counseling. With USON/McKesson, integrated data fields were developed in the EMR to automate data collection. Results: From 01/01/19 – 12/31/19, 717 new breast CA pts were seen at OHC, and 676/717 (94%) were screened. Of those screened, 279 new breast CA pts met NCCN criteria for BRCA testing. 140 (50%) eligible new pts had appts with the genetics team. Another 50 (18%) had confirmed testing outside of OHC. 57 (20%) refused appts and/or testing. 32 (11%) did not have appts, representing screen fails. Referrals in non-breast CA pts also increased by 127% (604 in 2019 vs. 264 in 2018), suggesting a halo effect. Analyses suggest the program to be economically viable, with a financial growth rate of 127%. Conclusions: EMR templates embedded with the NCCN guidelines for reflex genetics referral can appropriately increase the utilization of genetic services. Breast genetics screening and resultant appt/testing rates increased significantly in 2019 vs. 2018. Success in BRCA testing in breast CA will lead to expansion to other cancers and genes. Implementation of structured EMR genetics data fields can automate data collection and measure compliance. Integration of genetics screening into universal OCM treatment planning is feasible, economically viable, and scalable.

