Record Linkage of Chinese Patent Inventors and Authors of Scientific Articles

2021 ◽  
Vol 11 (18) ◽  
pp. 8417
Author(s):  
Robert Nowak ◽  
Wiktor Franus ◽  
Jiarui Zhang ◽  
Yue Zhu ◽  
Xin Tian ◽  
...  

We present an algorithm to find corresponding authors of patents and scientific articles. The authors are given as records in Scopus and the Chinese Patents Database. This issue is known as the record linkage problem, defined as finding and linking individual records from separate databases that refer to the same real-world entity. The presented solution is based on a record linkage framework combined with text feature extraction and machine learning techniques. The main challenges were low data quality, lack of common record identifiers, and a limited number of other attributes shared by both data sources. Matching based solely on an exact comparison of authors’ names does not solve the record linkage problem because many Chinese authors share the same full name. Moreover, the English spelling of Chinese names is not standardized in the analyzed data. Three ideas on how to extend attribute sets and improve record linkage quality were proposed: (1) fuzzy matching of names, (2) comparison of abstracts of patents and articles, (3) comparison of scientists’ main research areas calculated using all available metadata. The presented solution was evaluated in terms of matching quality and complexity on ≈250,000 record pairs linked by human experts. The results of numerical experiments show that the proposed strategies increase the quality of record linkage compared to typical solutions.
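The fuzzy name-matching idea can be illustrated with a small sketch. This is not the authors' pipeline; it is a minimal, hypothetical comparison of romanized names using Python's standard-library `difflib`, with a normalization step so that word order and case do not matter. The example names are invented.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity of two romanized names, ignoring case and the order of name parts."""
    norm = lambda s: " ".join(sorted(s.lower().split()))
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

# Spelling/ordering variants of the same name score high; unrelated names score low.
for a, b in [("Zhang Wei", "Wei Zhang"), ("Xiao-Ming Li", "Li Xiaoming"), ("Zhang Wei", "Liu Yang")]:
    print(f"{a!r} vs {b!r}: {name_similarity(a, b):.2f}")
```

In a full record-linkage pipeline, a score like this would be one feature among several (abstract similarity, research-area similarity) fed to a match classifier, rather than a decision rule on its own.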

Author(s):  
Feidu Akmel ◽  
Ermiyas Birihanu ◽  
Bahir Siraj

Software systems are software products or applications that support business domains such as manufacturing, aviation, health care, insurance, and so on. Software quality is a means of measuring how software is designed and how well it conforms to that design. Some of the variables we look at for software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, the quality standards used differ from one organization to another, so it is better to apply software metrics to measure the quality of software. Attributes gathered from source code through software metrics can serve as input for a software defect predictor. Software defects are errors introduced by software developers and stakeholders. Finally, in this study we survey applications of machine learning to software defect data gathered from previous research works.
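The idea that code metrics can feed a defect predictor can be sketched with a deliberately tiny example. The metric values and labels below are invented, and a plain k-nearest-neighbour vote stands in for whatever learner a real study would use:

```python
import math

# Hypothetical training modules: (lines_of_code, cyclomatic_complexity, coupling),
# label 1 = defective, 0 = clean. Values are invented for illustration.
train = [
    ((120, 14, 9), 1), ((30, 3, 2), 0), ((200, 22, 15), 1),
    ((45, 5, 3), 0), ((95, 11, 7), 1), ((25, 2, 1), 0),
]

def predict(metrics, k=3):
    """Classify a module by majority vote of its k nearest training modules."""
    nearest = sorted(train, key=lambda row: math.dist(row[0], metrics))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes > k / 2 else 0

print(predict((150, 18, 11)))  # -> 1 (large, complex module)
```

In practice the metric vectors would be extracted by a metrics tool and the labels taken from issue-tracker history; feature scaling would also matter once metrics sit on very different ranges.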


Work ◽  
2021 ◽  
pp. 1-12
Author(s):  
Zhang Mengqi ◽  
Wang Xi ◽  
V.E. Sathishkumar ◽  
V. Sivakumar

BACKGROUND: The growth of smart cities is accelerating; they rely on a wide range of information and communication technologies to maximize the quality of services. Even though the smart city concept provides many valuable services, security management remains one of the major issues due to shared threats and malicious activities. To overcome these problems, smart cities’ security factors should be analyzed continuously to eliminate unwanted activities and enhance the quality of services. OBJECTIVES: To address this problem, active machine learning techniques are used to predict the quality of services in the smart city and manage security-related issues. In this work, a deep reinforcement learning (DRL) concept is used to learn the features of smart cities; the learning process captures the entire activity of the smart city. City information is gathered with the help of security robots called cobalt robots. Newly incoming features of the smart city are examined using a modular deep neural network (MDNN). RESULTS: The system successfully predicts unwanted activity in smart cities by dividing the collected data into smaller subsets, which reduces complexity and improves the overall security management process. The efficiency of the system is evaluated using experimental analysis. CONCLUSION: This exploratory study was conducted with 200 obstacles placed in the smart city, and the introduced DRL with MDNN approach attains maximum results on security maintenance.


2021 ◽  
Vol 11 (7) ◽  
pp. 317
Author(s):  
Ismael Cabero ◽  
Irene Epifanio

This paper presents a snapshot of the distribution of time that Spanish academic staff spend on different tasks. We carry out a statistical exploratory study by analyzing the responses provided in a survey of 703 Spanish academic staff in order to draw a clear picture of the current situation. This analysis considers many factors, primarily gender, academic rank, age, and academic discipline. The tasks considered are divided into smaller activities, which allows us to discover hidden patterns. Tasks are not restricted to the academic world, but also include domestic chores. We address this problem from a totally new perspective by using machine learning techniques, such as cluster analysis. In order to make important decisions, policymakers must know how academic staff spend their time, especially now that legal modifications are planned for the Spanish university environment. We reveal large gender gaps in the time spent on teaching-quality and caring tasks, and find that unrecognized overtime is very frequent.
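The cluster-analysis idea can be sketched as follows. This is not the authors' analysis; it is a minimal k-means run on invented weekly-hours profiles (teaching, research, admin, domestic) showing how respondents with similar time-use patterns end up grouped together:

```python
# Hypothetical weekly hours per respondent: (teaching, research, admin, domestic).
respondents = [
    (20, 10, 5, 30), (22, 8, 6, 28),   # teaching-heavy with many domestic hours
    (8, 30, 4, 10), (6, 32, 5, 12),    # research-heavy
    (10, 12, 25, 15), (12, 10, 22, 14) # admin-heavy
]

def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then recompute
    each centroid as the mean of its group; repeat. First k points seed the centroids."""
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[i].append(p)
        centroids = [tuple(sum(d) / len(g) for d in zip(*g)) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

for g in kmeans(respondents, 3):
    print(g)
```

On this toy data the three time-use profiles separate cleanly; with real survey data one would standardize the variables and choose k by an internal validity index rather than by eye.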


2018 ◽  
Vol 27 (03) ◽  
pp. 1850011 ◽  
Author(s):  
Athanasios Tagaris ◽  
Dimitrios Kollias ◽  
Andreas Stafylopatis ◽  
Georgios Tagaris ◽  
Stefanos Kollias

Neurodegenerative disorders, such as Alzheimer’s and Parkinson’s, constitute a major factor in long-term disability and are becoming a serious concern in developed countries. As there are, at present, no effective therapies, early diagnosis along with avoidance of misdiagnosis seems to be critical to ensuring a good quality of life for patients. In this sense, the adoption of computer-aided diagnosis tools can offer significant assistance to clinicians. In the present paper, we first provide a comprehensive recording of medical examinations relevant to those disorders. Then, a review is conducted of the use of machine learning techniques in supporting the diagnosis of neurodegenerative diseases, with reference to the medical datasets used; special attention is given to the field of deep learning. In addition, we announce the launch of a newly created dataset for Parkinson’s disease, containing epidemiological, clinical, and imaging data, which will be publicly available to researchers for benchmarking purposes. To assess the potential of the new dataset, an experimental study in Parkinson’s diagnosis is carried out, based on state-of-the-art deep neural network architectures and yielding very promising accuracy results.


10.2196/16344 ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. e16344 ◽  
Author(s):  
Giacomo Valle

Decades of technological developments have populated the field of brain-machine interfaces and neuroprosthetics with several replacement strategies, neural modulation treatments, and rehabilitation techniques to improve the quality of life for patients affected by sensory and motor disabilities. This field is now quickly expanding thanks to advances in neural interfaces, machine learning techniques, and robotics. Despite many clinical successes and multiple innovations in animal models, brain-machine interfaces remain mainly confined to sophisticated laboratory environments, indicating that a step forward in the underlying technology is necessary. Interestingly, Elon Musk and Neuralink have recently presented a new brain-machine interface platform with thousands of channels, fast implantation, and advanced signal processing. Here, we comment on how their work fits into the context of restoring sensory-motor function through neuroprostheses.


Author(s):  
Anna Ferrante ◽  
James Boyd ◽  
Sean Randall ◽  
Adrian Brown ◽  
James Semmens

ABSTRACT
Objectives: Record linkage is a powerful technique which transforms discrete episode data into longitudinal person-based records. These records enable the construction and analysis of complex pathways of health and disease progression, and of service use. Achieving high linkage quality is essential for ensuring the quality and integrity of research based on linked data. The methods used to assess linkage quality will depend on the volume and characteristics of the datasets involved, the processes used for linkage, and the additional information available for quality assessment. This paper proposes and evaluates two methods to routinely assess linkage quality.
Approach: Linkage units currently use a range of methods to measure, monitor and improve linkage quality; however, no common approach or standards exist. There is an urgent need to develop “best practices” in evaluating, reporting and benchmarking linkage quality. In assessing linkage quality, of primary interest is knowing the number of true matches and non-matches identified as links and non-links. Any misclassification of matches within these groups introduces linkage errors. We present efforts to develop sharable methods to measure linkage quality in Australia. These include a sampling-based method to estimate both precision (accuracy) and recall (sensitivity) following record linkage, and a benchmarking method: a transparent and transportable methodology to benchmark the quality of linkages across different operational environments.
Results: The sampling-based method achieved estimates of linkage quality that were very close to the actual linkage quality metrics. It presents a feasible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The benchmarking method provides a systematic approach to estimating linkage quality with a set of open and shareable datasets and a set of well-defined, established performance metrics. It offers an opportunity to benchmark the linkage quality of different record linkage operations. Both methods have the potential to assess the inter-rater reliability of clerical reviews.
Conclusions: Both methods produce reliable estimates of linkage quality, enabling the exchange of information within and between linkage communities. It is important that researchers can assess risk in studies using record linkage techniques. Understanding the impact of linkage quality on research outputs highlights the need for standard methods to routinely measure linkage quality. These two methods provide a good start to the quality process, but it is important to identify standards and good practices in all parts of the linkage process (pre-processing, standardising activities, linkage, grouping and extracting).
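The sampling-based estimation of precision and recall can be sketched as follows. This is a simplified illustration, not the linkage units' actual procedure: a known ground truth stands in for clerical review, and both quantities are estimated from random samples rather than from an exhaustive comparison.

```python
import random

def estimate_precision_recall(links, true_matches, sample_size=1000, seed=42):
    """Estimate precision from a random sample of created links, and recall
    from a random sample of known true matches (a stand-in for clerical review)."""
    rng = random.Random(seed)
    link_set, match_set = set(links), set(true_matches)
    link_sample = rng.sample(sorted(link_set), min(sample_size, len(link_set)))
    match_sample = rng.sample(sorted(match_set), min(sample_size, len(match_set)))
    precision = sum(p in match_set for p in link_sample) / len(link_sample)
    recall = sum(p in link_set for p in match_sample) / len(match_sample)
    return precision, recall

# Toy demo: 1,000 true matches; the linkage found 900 of them plus 100 false links.
truth = [(i, i) for i in range(1000)]
found = [(i, i) for i in range(900)] + [(i, -1) for i in range(100)]
print(estimate_precision_recall(found, truth))  # -> (0.9, 0.9)
```

In an operational setting the sampled pairs would go to clerical reviewers instead of being checked against a known truth set, and confidence intervals around the estimates would be reported alongside the point values.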


2021 ◽  
Vol 15 ◽  
Author(s):  
Jesús Leonardo López-Hernández ◽  
Israel González-Carrasco ◽  
José Luis López-Cuadrado ◽  
Belén Ruiz-Mezcua

Nowadays, the recognition of emotions in people with sensory disabilities still represents a challenge due to the difficulty of generalizing and modeling the set of brain signals. In recent years, the technology used to study a person’s behavior and emotions based on brain signals is the brain–computer interface (BCI). Although previous works have already proposed the classification of emotions in people with sensory disabilities using machine learning techniques, a model for the recognition of emotions in people with visual disabilities has not yet been evaluated. Consequently, in this work, the authors present a twofold framework focused on people with visual disabilities. Firstly, auditory stimuli have been used, and a component for the acquisition and extraction of brain signals has been defined. Secondly, analysis techniques for the modeling of emotions have been developed, and machine learning models for the classification of emotions have been defined. Based on the results, the algorithm with the best performance in validation is random forest (RF), with accuracies of 85% and 88% in the classification of negative and positive emotions, respectively. According to the results, the framework is able to classify positive and negative emotions, but the experiments also show that its performance depends on the number of features in the dataset and that the quality of the electroencephalogram (EEG) signals is a determining factor.
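The ensemble-voting idea behind a random forest can be illustrated with a deliberately simplified stand-in: a majority vote over one-level decision stumps, each trained on a bootstrap sample of invented two-feature EEG trials. The features, values, and labels below are hypothetical; the real study used richer EEG feature sets and full random forests.

```python
import random
import statistics

# Hypothetical per-trial features: (alpha_power, beta_power), label 1 = positive emotion.
data = [
    ((0.80, 0.20), 1), ((0.70, 0.30), 1), ((0.90, 0.10), 1), ((0.75, 0.35), 1),
    ((0.20, 0.80), 0), ((0.30, 0.70), 0), ((0.10, 0.90), 0), ((0.25, 0.65), 0),
]

def train_stump(sample):
    """One decision stump on a bootstrap sample: split a random feature at its mean
    and let the majority label on each side of the split decide."""
    feat = random.randrange(2)
    thresh = statistics.mean(x[feat] for x, _ in sample)
    above = [y for x, y in sample if x[feat] > thresh]
    vote_above = round(statistics.mean(above)) if above else 0
    return feat, thresh, vote_above

def forest_predict(x, n_trees=25, seed=7):
    """Majority vote of stumps, each fit on its own bootstrap resample of the data."""
    random.seed(seed)
    votes = 0
    for _ in range(n_trees):
        sample = random.choices(data, k=len(data))  # bootstrap resample
        feat, thresh, vote_above = train_stump(sample)
        votes += vote_above if x[feat] > thresh else 1 - vote_above
    return 1 if votes > n_trees / 2 else 0
```

The two ingredients shown, bootstrap resampling and per-tree feature randomness, are the ones that make a real random forest's trees decorrelated enough for majority voting to help.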


Author(s):  
Tolga Ensari ◽  
Melike Günay ◽  
Yağız Nalçakan ◽  
Eyyüp Yildiz

Machine learning is one of the most popular research areas, and it is commonly used in wireless communications and networks. Security and fast communication are among the key requirements for next-generation wireless networks. Machine learning techniques are becoming more important day by day, since the types, amount, and structure of data are continuously changing. Recent developments in smartphones and other devices such as drones, wearables, and machines with sensors require reliable communication within internet of things (IoT) systems. For this purpose, artificial intelligence can increase security and reliability and manage the data generated by wireless systems. In this chapter, the authors investigate several machine learning techniques for wireless communications, including deep learning, which represents a branch of artificial neural networks.


Author(s):  
Christos Floros ◽  
Panagiotis Ballas

Crises around the world reveal a generally unstable environment within which banks and financial institutions have operated in recent decades. Risk is an inherent characteristic of financial institutions and a multifaceted phenomenon. Everyday business practice involves decisions that require information about the various types of threats involved, together with an evaluation of their impact on future performance, yielding combinations of risk types and projected returns for decision makers to choose from. Moreover, financial institutions process a massive amount of data, collected either internally or externally, in an effort to continuously analyse trends in the economy they operate in and to decode global economic conditions. Building on existing research in accounting and finance, the authors explore the application of machine learning techniques to facilitate decision making by the top management of contemporary financial institutions, improving the quality of their accounting disclosure.

