require data
Recently Published Documents


TOTAL DOCUMENTS: 80 (FIVE YEARS: 30)

H-INDEX: 6 (FIVE YEARS: 2)

PLoS ONE ◽ 2021 ◽ Vol 16 (11) ◽ pp. e0260594
Author(s): Cassia Garcia Moraes Pagano ◽ Tais de Campos Moreira ◽ Daniel Sganzerla ◽ Ana Maria Frölich Matzenbacher ◽ Amanda Gomes Faria ◽ ...

Telemedicine can be used to conduct ophthalmological assessment of patients, facilitating patient access to specialist care. Since teleophthalmology models require data-collection support from other health professionals, the purpose of our study was to assess agreement between a nursing technician and an ophthalmologist in the acquisition of health parameters that can be used for remote analysis as part of a telemedicine strategy. A cross-sectional study was conducted with 140 patients referred to an ophthalmological telediagnosis center by primary healthcare doctors. The health parameters evaluated were visual acuity (VA), objective ophthalmic measures acquired by autorefraction, keratometry, and intraocular pressure (IOP). Bland-Altman plots were used to analyze agreement between the nursing technician and the ophthalmologist. The Bland-Altman analysis showed a mean bias equal to zero for the VA measurements [95% LoA: -0.25 to 0.25], 0.01 [95% LoA: -0.86 to 0.88] for spherical equivalent (M), -0.08 [95% LoA: -1.10 to 0.95] for keratometry (K) and -0.23 [95% LoA: -4.40 to 4.00] for IOP. The measures had a high linear correlation (R [95% CI]: 0.87 [0.82–0.91], 0.97 [0.96–0.98], 0.96 [0.95–0.97] and 0.88 [0.84–0.91], respectively). These results demonstrate that remote ophthalmological data collection by adequately trained health professionals is viable, confirming the utility and safety of such solutions in scenarios where access to ophthalmologists is limited.
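For readers who want to reproduce this kind of agreement analysis, here is a minimal sketch of the Bland-Altman computation, using made-up paired measurements rather than the study's actual data:

```python
# A minimal sketch of the Bland-Altman computation
# (hypothetical paired measurements, not the study's data).
import numpy as np

def bland_altman(technician, ophthalmologist):
    """Return mean bias and 95% limits of agreement for paired measures."""
    a = np.asarray(technician, dtype=float)
    b = np.asarray(ophthalmologist, dtype=float)
    diffs = a - b                      # per-patient disagreement
    bias = diffs.mean()                # mean bias
    sd = diffs.std(ddof=1)             # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Example with made-up IOP readings (mmHg):
bias, (lo, hi) = bland_altman([14, 16, 12, 18], [15, 15, 13, 17])
print(f"bias={bias:.2f}, 95% LoA: {lo:.2f} to {hi:.2f}")
```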


2021
Author(s): Robert Lundqvist

In many workplaces, tables, bar charts and run charts abound, and the tools used are mostly common spreadsheet systems such as Microsoft Excel. These systems are used for many kinds of tasks; they are flexible and powerful, but there are also drawbacks. Many of these drawbacks are general in nature and mostly waste time and resources, but other issues can definitely result in severe errors. There are still valid reasons to use spreadsheets for statistical purposes: licenses are available at low cost and some statistical functions are present. Despite their shortcomings, spreadsheets will not be replaced in any foreseeable future. They are used both for administrative and "analytical" purposes, and in research, spreadsheets are also commonly used to share data. The goal here is to point out some simple routines in common spreadsheets that are still not widely known, specifically those called pivot tables in Microsoft Excel. Better spreadsheet skills and knowledge of how to set up pivot tables could make it possible for both experienced analysts and staff with less statistical training to make everyday basic calculations with little effort and without specialized software. Moreover, since these routines require data to be set up in line with how data must be organized in statistical software, future use of such software is simplified. Furthermore, the necessary training can be achieved with limited resources. So _everyone_ should have pivot tables as part of their everyday skills.
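As an aside, the same pivot-table layout can be reproduced outside Excel; a minimal sketch with pandas (hypothetical workplace data, illustrative column names):

```python
# A pandas analogue of the Excel pivot tables discussed above
# (hypothetical data; column names are illustrative).
import pandas as pd

df = pd.DataFrame({
    "department": ["A", "A", "B", "B", "B"],
    "month":      ["Jan", "Feb", "Jan", "Jan", "Feb"],
    "cases":      [12, 15, 7, 9, 11],
})

# Rows = department, columns = month, cell = summed cases:
# the same layout an Excel pivot table would produce.
summary = pd.pivot_table(df, values="cases", index="department",
                         columns="month", aggfunc="sum", fill_value=0)
print(summary)
```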


2021 ◽ Vol 23 (3) ◽ pp. 586-592
Author(s): Anna Borucka ◽ Dariusz Pyza

Road accidents are one of the basic determinants of road safety. Most research covers large territorial areas, and the results of such research do not take into account the differences between individual regions, which often leads to incorrect results and interpretations. What makes it difficult to conduct analyses in a narrow territorial area is the small number of observations: narrowing the research area means that the number of accidents per time unit is often very low. There are many zero observations in such data sets, which may affect the reliability of the research results. These data are usually aggregated, which leads to information loss. The authors have therefore applied a model that addresses these problems, proposing a method that does not require data aggregation and allows for the analysis of data sets with an excess of zero observations. The presented model can be implemented in different territorial areas.
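The abstract does not name the model, but a zero-inflated Poisson regression is one standard choice for count data with excess zeros; a minimal sketch with statsmodels on simulated accident counts:

```python
# The abstract does not name the model; a zero-inflated Poisson (ZIP)
# regression is one standard choice for counts with excess zeros.
# A minimal sketch on made-up weekly accident counts.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 200
traffic = rng.uniform(0, 1, n)                  # hypothetical covariate
lam = np.exp(-1.0 + 2.0 * traffic)              # Poisson mean per unit
counts = rng.poisson(lam)
counts[rng.uniform(size=n) < 0.3] = 0           # inject structural zeros

X = sm.add_constant(traffic)
model = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)))
result = model.fit(disp=False)
print(result.summary())
```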


Author(s): Rajath C V

An autonomous car is also called a self-driving car or a robot car. Historically, self-driving tests controlled by radio technology began in the 1920s, and embedded guide tracks followed in the 1950s. The present-day individual is habituated to automation technology and the use of robotics in areas such as agriculture, medicine, transportation, the IT industry, etc. In recent decades, the automotive sector has come to the forefront of research into private car technologies, and the Level-3 autonomy standard was released in 2020. Automotive technology researchers solve new challenges every day. The prime intention of this project is to create a self-driving car using Deep-Q-Networks, enabling the car to make decisions based on spontaneously occurring events. Autonomous vehicles require data and regular updates, so IoT and AI can assist in delivering device data to the machine.
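The abstract names Deep-Q-Networks without further detail; as a hedged illustration, the core temporal-difference update a DQN approximates looks like this (a toy Q-table stands in for the network):

```python
# Toy sketch of the Q-learning update at the heart of a DQN
# (a tabular Q stands in for the neural network approximator).
import numpy as np

gamma = 0.99          # discount factor
alpha = 0.1           # learning rate
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))

def td_update(s, a, reward, s_next, done):
    """One temporal-difference step: pull Q(s, a) toward the bootstrap target."""
    target = reward if done else reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# e.g. the agent steered (action 1) in state 0, got reward 1, reached state 2:
td_update(0, 1, 1.0, 2, done=False)
```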


METIK JURNAL ◽ 2021 ◽ Vol 5 (1) ◽ pp. 71-78
Author(s): Renaldi Anwar ◽ Fahrullah ◽ Dedi Mirwansyah

PT ALTRAK 1978 Samarinda is a large branch with several departments, including HR&GA, FA, Marketing, Part, Rebuild Center, Service and VOM (Vehicle Operation Management). The task of VOM is to maintain, repair and manage all operational vehicle assets owned by PT. ALTRAK 1978 at 10 depots in the Kalimantan area. One of VOM's tasks is to create reports on all processes carried out at VOM and on everything related to company operational vehicles, but many of these reports require data from vehicle repairs, such as the operational vehicle repair history. Retrieving data from the repair history is always difficult because the data is incomplete and not integrated between departments at PT. ALTRAK 1978 Samarinda. In addition, when there are complaints about damage to operational vehicles, vehicle users still have trouble reporting it because there is no administration system covering the vehicle repair process. The problem, then, is how to make it easier for VOM to report vehicle repairs from a data source or repair history that can be viewed and retrieved easily by the parties involved in the repair administration process, with the aim of letting VOM and vehicle users receive and report vehicle damage quickly and in a manner integrated across departments.


2021
Author(s): Vincent Calcagno ◽ Nik Cunniffe ◽ Frederic M Hamelin

Many methods attempt to detect species associations from co-occurrence patterns. Such associations are then typically used to infer inter-specific interactions. However, correlation is not equivalent to interaction. Habitat heterogeneity and out-of-equilibrium colonization histories are acknowledged to cause species associations even when inter-specific interactions are absent. Here we show how classical metacommunity dynamics, within a homogeneous habitat at equilibrium, can also lead to statistical associations. This occurs even when species do not interact. All that is required is patch disturbance (i.e., simultaneous extinction of several species in a patch), a common phenomenon in a wide range of real systems. We compare direct tests of pairwise independence, matrix permutation approaches and joint species distribution modelling. We use mathematical analysis and example simulations to show that patch disturbance leads all these methods to produce characteristic signatures of spurious association from "null" co-occurrence matrices. Including patch age (i.e., the time since the last patch disturbance event) as a covariate is necessary to resolve this artefact. However, this would require data that very often are not available in practice for these types of analyses. We contend that patch disturbance is a key (but hitherto overlooked) factor which must be accounted for when analysing species co-occurrence.
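In the spirit of the simulations described, here is a minimal illustration (assuming two non-interacting species and random whole-patch disturbance) of how disturbance alone produces a spurious association:

```python
# Two species that never interact, but whole-patch disturbance wipes out
# both at once, creating a spurious positive association.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
n_patches = 1000
# Independent occupancy for each species:
sp1 = rng.uniform(size=n_patches) < 0.6
sp2 = rng.uniform(size=n_patches) < 0.6
# Patch disturbance: some patches were recently cleared of *all* species.
disturbed = rng.uniform(size=n_patches) < 0.3
sp1[disturbed] = False
sp2[disturbed] = False

# 2x2 co-occurrence table and a direct test of pairwise independence:
table = np.array([[np.sum(sp1 & sp2),  np.sum(sp1 & ~sp2)],
                  [np.sum(~sp1 & sp2), np.sum(~sp1 & ~sp2)]])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}")   # small p despite zero interaction
```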


2021
Author(s): Anne Brown

Ride-hailing is a source of opportunity and consternation for cities, presenting both opportunities to expand mobility and added challenges such as increased congestion. In an effort to curb congestion (and to tap a new source of revenue), cities and states across the US have imposed fees on ride-hail trips. Fees are far from uniform; the fee bases, the amounts levied, and the uses of the funds once collected vary considerably between cities and states. Although previous research has shown that toll and transit fare structures affect equity, no research to date has examined the equity implications of ride-hail fee structures. This paper addresses that gap and asks: what are the equity implications of different ride-hail fee structures? I answer this question using trip-level data from over 97 million ride-hail trips taken in 2018 and 2019 in the City of Chicago. Looking at trips serving low-, middle-, and high-income neighborhoods, I examine equity implications under four different fee scenarios: 1) a flat rate, 2) a percentage of fare, 3) a varied rate for pooled trips, and 4) per-mile fees. Fees that charge a percentage of the total fare are more progressive than flat, per-mile, or pool-differentiated fees. Yet stark income differences between neighborhoods mean that even varied fees remain regressive with respect to income. Cities or states considering ride-hail fees should start by identifying concrete equity-first goals and design fee structures to achieve them. Cities should identify and require the data needed to assess progress on those goals and adjust fees as needed to keep them aligned. Far from being a silver bullet for issues like increasing congestion or bolstering transit funding, ride-hail fees should be seen as just one among a broader suite of policies needed to realize broader city goals.
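To make the fee-structure comparison concrete, here is a toy calculation with hypothetical rates (the paper's actual fee levels are not stated in the abstract):

```python
# Toy comparison of the four fee bases discussed above, with
# hypothetical rates; a flat fee eats a larger share of a cheap
# trip's fare, which is what makes it regressive relative to fare.
trips = [
    {"fare": 6.00,  "miles": 1.5,  "pooled": True},   # short pooled trip
    {"fare": 25.00, "miles": 12.0, "pooled": False},  # long solo trip
]

def flat(t):        return 1.00                       # same fee per trip
def pct_of_fare(t): return 0.05 * t["fare"]           # scales with fare
def pool_varied(t): return 0.50 if t["pooled"] else 1.25
def per_mile(t):    return 0.10 * t["miles"]

for t in trips:
    for fee in (flat, pct_of_fare, pool_varied, per_mile):
        share = fee(t) / t["fare"] * 100              # burden as % of fare
        print(f"fare=${t['fare']:.2f} {fee.__name__:>12}: "
              f"${fee(t):.2f} ({share:.1f}% of fare)")
```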


2021 ◽ Vol 14 (8) ◽ pp. 1392-1400
Author(s): Sagar Bharadwaj ◽ Praveen Gupta ◽ Ranjita Bhagwan ◽ Saikat Guha

Analysts frequently require data from multiple sources for their tasks, but finding these sources is challenging in exabyte-scale data lakes. In this paper, we address this problem for our enterprise's data lake by using machine learning to identify related data sources. Leveraging queries made to the data lake over a month, we build a relevance model that determines whether two columns across two data streams are related. We then use the model to find relations at scale across tens of millions of column pairs and thereafter construct a data relationship graph in a scalable fashion, processing a data lake that holds 4.5 petabytes of data in approximately 80 minutes. Using manually labeled datasets as ground truth, we show that our techniques yield improvements of at least 23% over state-of-the-art methods.
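The abstract does not detail the relevance model's features or learner; one plausible minimal sketch is to featurize column pairs (e.g., name similarity and how often the two streams were queried together) and train a binary classifier:

```python
# A hedged sketch of a column-pair relevance model; the features,
# names, and learner here are illustrative, not the paper's method.
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def pair_features(col_a, col_b, co_query_count):
    name_sim = SequenceMatcher(None, col_a, col_b).ratio()
    return [name_sim, co_query_count]

# Hypothetical labeled pairs: (name_a, name_b, co-queries, related?)
labeled = [
    ("user_id",    "userid",    40, 1),
    ("user_id",    "timestamp",  2, 0),
    ("order_id",   "order_key", 25, 1),
    ("latency_ms", "region",     0, 0),
]
X = [pair_features(a, b, c) for a, b, c, _ in labeled]
y = [label for *_, label in labeled]

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([pair_features("cust_id", "customer_id", 18)]))
```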


Author(s): Shravankumar Venumula ◽ Senthil Ramadoss

This paper proposes receiving encrypted sensitive text data on personal electronic devices by combining two procedures: steganography and cryptography. The security of the system is provided by asymmetric-key cryptography followed by two sequential layers of video-based steganography, ensuring security while drawing the best out of the latter. The method was modeled and simulated, with experiments designed to analyze the relationship between protection, capacity, and data density. The studies checked data retention using cover videos of ten different sizes. The results show that capacity changes with protection, an unavoidable tradeoff. The novelty of the work lies in presenting different measures and leaving the decision to the service provider and the application. The tests cover the 1-LSB, 2-LSB and 3-LSB methods, detailing their interaction with the cover video. The core results demonstrate that the 3-LSB method offers adequate safeguards with realistic capacity, favoring 3-LSB over the 1-LSB and 2-LSB techniques.
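The paper's full video pipeline is not described in the abstract; the core 1-LSB idea it builds on can be sketched on a flat array of pixel bytes:

```python
# Minimal sketch of 1-LSB embedding/extraction on pixel bytes
# (a fake frame stands in for the paper's cover video).
import numpy as np

def embed_1lsb(pixels, payload: bytes):
    """Hide payload bits in the least significant bit of each pixel byte."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits   # overwrite LSBs
    return out

def extract_1lsb(pixels, n_bytes: int):
    bits = pixels[:n_bytes * 8] & 1                     # read LSBs back
    return np.packbits(bits).tobytes()

frame = np.random.randint(0, 256, size=4096, dtype=np.uint8)  # fake frame
stego = embed_1lsb(frame, b"secret")
assert extract_1lsb(stego, 6) == b"secret"
```

The 2-LSB and 3-LSB variants compared in the paper trade visual fidelity for capacity by overwriting two or three low bits per byte instead of one.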


Author(s): Alexander Senf ◽ Robert Davies ◽ Frédéric Haziza ◽ John Marshall ◽ Juan Troncoso-Pastoriza ◽ ...

Motivation: The majority of genome analysis tools and pipelines require data to be decrypted for access. This potentially leaves sensitive genetic data exposed, either because the unencrypted data is not removed after analysis, or because the data leaves traces on the permanent storage medium.
Results: We defined a file container specification enabling direct byte-level compatible random access to encrypted genetic data stored in community standards such as SAM/BAM/CRAM/VCF/BCF. By standardizing this format, we show how it can be added as a native file format to genomic libraries, enabling direct analysis of encrypted data without the need to create a decrypted copy.
Availability and implementation: The Crypt4GH specification can be found at: http://samtools.github.io/hts-specs/crypt4gh.pdf.
Supplementary information: Supplementary data are available at Bioinformatics online.
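For context, the Crypt4GH specification linked above splits the payload into 64 KiB plaintext segments, each encrypted with ChaCha20-Poly1305 (a 12-byte nonce and 16-byte MAC per segment); that segment structure is what enables random access. A sketch of the offset arithmetic, with the header length treated as a per-file parameter:

```python
# Offset arithmetic implied by the Crypt4GH segment layout:
# each segment on disk = 12-byte nonce + up to 64 KiB ciphertext + 16-byte MAC.
SEGMENT = 64 * 1024          # plaintext bytes per segment
OVERHEAD = 12 + 16           # nonce + authentication tag per segment

def locate(plain_offset: int, header_len: int):
    """Map a plaintext byte offset to (file offset of its segment,
    offset within the decrypted segment)."""
    segment_idx = plain_offset // SEGMENT
    file_offset = header_len + segment_idx * (SEGMENT + OVERHEAD)
    return file_offset, plain_offset % SEGMENT

# To read plaintext byte 1_000_000: seek to its segment, decrypt only
# that segment, then index into the decrypted block.
print(locate(1_000_000, header_len=124))   # header_len varies per file
```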

