AI Augmented Approach to Identify Shared Ideas from Large Format Public Consultation

2021 ◽  
Vol 13 (16) ◽  
pp. 9310
Author(s):  
Min-Hsien Weng ◽  
Shaoqun Wu ◽  
Mark Dyer

Public data, contributed by citizens, stakeholders and other potentially affected parties, are increasingly used to collect the shared ideas of a wider community. Having collected large quantities of text data from public consultation, the challenge is often how to interpret the dataset without resorting to lengthy, time-consuming manual analysis. One approach gaining ground is the use of Natural Language Processing (NLP) technologies. Based on machine learning applied to the analysis of human natural languages, NLP provides the opportunity to automate data analysis for volumes of text that would be virtually impossible to analyse manually. Using NLP toolkits, this paper presents a novel approach for identifying and visualising shared ideas from large format public consultation. The approach analyses the grammatical structures of public texts to discover shared ideas from sentences comprising subject + verb + object and verb + object that express public opinions. In particular, the shared ideas are identified by extracting noun, verb and adjective phrases and clauses from subjects and objects, which are then categorised by urban infrastructure categories and terms. The results are visualised in a hierarchy chart and a word tree using cascade and tree views. The approach is illustrated using data collected from a public consultation exercise called “Share an Idea” undertaken in Christchurch, New Zealand, after the 2011 earthquake. The approach has the potential to upscale public participation to identify shared design values and associated qualities for a wide range of public initiatives, including urban planning.
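The core extraction step described above can be sketched in a few lines. The paper uses NLP toolkits with a full dependency parser; here the parse is hand-written for a single illustrative sentence, so the function names and tuple format are assumptions, not the authors' implementation.

```python
# Simplified sketch of subject + verb + object extraction from
# dependency-parsed tokens. Tokens are (word, dependency_label, head_index)
# tuples; a real pipeline would obtain these from a parser.

def extract_svo(tokens):
    """Extract (subject, verb, object) triples from parsed tokens."""
    triples = []
    for i, (word, dep, head) in enumerate(tokens):
        if dep == "ROOT":  # main verb of the sentence
            subj = next((w for w, d, h in tokens if d == "nsubj" and h == i), None)
            obj = next((w for w, d, h in tokens if d == "dobj" and h == i), None)
            if obj:
                triples.append((subj, word, obj))
    return triples

# "The city needs more cycleways" with a hand-written parse
parsed = [
    ("The", "det", 1),
    ("city", "nsubj", 2),
    ("needs", "ROOT", 2),
    ("more", "amod", 4),
    ("cycleways", "dobj", 2),
]
print(extract_svo(parsed))  # [('city', 'needs', 'cycleways')]
```

The extracted noun phrases ("city", "cycleways") would then be matched against urban infrastructure categories, as the abstract describes.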

2021 ◽  
Vol 11 (18) ◽  
pp. 8464
Author(s):  
Adam L. Kaczmarek ◽  
Bernhard Blaschitz

This paper presents research on 3D scanning by taking advantage of a camera array consisting of up to five adjacent cameras. Such an array makes it possible to produce a disparity map with higher precision than a stereo camera, while preserving the advantages of a stereo camera such as the ability to operate over a wide range of distances and in highly illuminated areas. In an outdoor environment, the array is a competitive alternative to other 3D imaging equipment such as structured-light 3D scanners or Light Detection and Ranging (LIDAR). The arrays considered are called Equal Baseline Camera Arrays (EBCA). This paper presents a novel approach to calibrating the array based on the use of self-calibration methods. It also introduces a testbed which makes it possible to develop new algorithms for obtaining 3D data from images taken by the array. The testbed was released as open source. Moreover, this paper shows new results of using these arrays with different stereo matching algorithms, including an algorithm based on a convolutional neural network and deep learning technology.
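The precision benefit of a multi-baseline array follows from the stereo depth relation Z = f·B/d: for a fixed depth, a larger baseline B produces a larger (easier to measure) disparity d. A back-of-envelope sketch, with illustrative values not taken from the paper:

```python
# Depth from disparity in a rectified stereo pair: Z = f * B / d,
# where f is focal length (pixels), B the baseline (m), d the disparity (px).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f = 1000.0   # focal length in pixels (assumed)
Z = 5.0      # object at 5 m
for B in (0.1, 0.2, 0.4):     # candidate baselines of an EBCA-like array
    d = f * B / Z             # disparity observed at depth Z
    print(B, d, depth_from_disparity(f, B, d))
```

Doubling the baseline doubles the disparity at a given depth, which is why combining several baselines in one array can refine the disparity map.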


2019 ◽  
Vol 35 (23) ◽  
pp. 4979-4985 ◽  
Author(s):  
Woosung Jeon ◽  
Dongsup Kim

Abstract Motivation One of the most successful methods for predicting the properties of chemical compounds is quantitative structure–activity relationship (QSAR) modelling. The prediction accuracy of QSAR models has recently been greatly improved by employing deep learning technology. In particular, newly developed molecular featurizers based on graph convolution operations on molecular graphs significantly outperform the conventional extended-connectivity fingerprint (ECFP) features in both classification and regression tasks, indicating that it is critical to develop more effective new featurizers to fully realize the power of deep learning techniques. Motivated by the fact that there is a clear analogy between chemical compounds and natural languages, this work develops a new molecular featurizer, FP2VEC, which represents a chemical compound as a set of trainable embedding vectors. Results To implement and test our new featurizer, we build a QSAR model using a simple convolutional neural network (CNN) architecture that has been successfully used for natural language processing tasks such as sentence classification. By testing our new method on several benchmark datasets, we demonstrate that the combination of the FP2VEC and CNN model can achieve competitive results in many QSAR tasks, especially in classification tasks. We also demonstrate that the FP2VEC model is especially effective for multitask learning. Availability and implementation FP2VEC is available from https://github.com/wsjeon92/FP2VEC. Supplementary information Supplementary data are available at Bioinformatics online.
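The central idea, treating fingerprint bits like words in a sentence, amounts to an embedding lookup. A minimal sketch, assuming a tiny fixed table where the paper uses a large table trained jointly with the CNN (all sizes and names here are illustrative):

```python
# Minimal sketch of the FP2VEC idea: a compound's active fingerprint bits
# become indices into an embedding table, yielding a set of dense vectors.
# In the paper the table is trainable; here it is randomly initialised.

import random

random.seed(0)
EMBED_DIM = 4
VOCAB = 8  # number of possible fingerprint bit positions (toy value)
embedding_table = [[random.uniform(-1, 1) for _ in range(EMBED_DIM)]
                   for _ in range(VOCAB)]

def fp2vec(active_bits):
    """Map the active fingerprint bit indices of a compound to embeddings."""
    return [embedding_table[i] for i in active_bits]

vectors = fp2vec([1, 3, 5])  # compound with three active fingerprint bits
print(len(vectors), len(vectors[0]))  # 3 vectors of dimension 4
```

The resulting matrix of embeddings plays the role of a sentence's word-embedding matrix, which is what lets an NLP-style CNN consume it.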


Author(s):  
Ning Wang ◽  
Can Wang ◽  
Limin Hou ◽  
Bing Fang

Understanding stressors is an effective way to decrease employee stress and improve employee mental health. The extant literature mainly focuses on a single stressor among the various aspects of employees' work or life, and generally uses questionnaires or interviews to obtain data. Data obtained in such ways are often subjective and lack authenticity. We propose a novel machine–human hybrid approach that conducts qualitative content analysis of user-generated online content to explore the stressors of young employees in contemporary society. The user-generated online content was collected from a well-known Q&A platform in China, and we adopted natural language processing and deep learning technology to discover knowledge. Our results identified three kinds of new stressors, namely affection from leaders, affection from the social circle, and the gap between dream and reality. These newly identified stressors were due to the lack of social security and regulation, frequent occurrences of social media fearmongering, and subjective cognitive bias, respectively. In light of our findings, we offer valuable practical insights and policy recommendations to relieve stress and improve the mental health of young employees. The primary contributions of our work are two-fold. First, we propose a novel approach to explore the stressors of young employees in contemporary society, which is applicable not only in China but also in other countries and regions. Second, we expand the scope of job demands-resources (JD-R) theory, which is an important framework for the classification of employee stressors.


2018 ◽  
Vol 7 (3.3) ◽  
pp. 206
Author(s):  
V Sumalatha ◽  
Dr R. Santhi

Machine learning plays a key role in a wide range of applications such as data mining, natural language processing and expert systems, and it provides solutions across domains when large datasets are applied. Supervised learning consists of mathematical algorithms that optimise functions from given inputs, and machine learning can solve problems that are difficult to address with purely numerical methods. In this research paper, a model is developed to improve a classification algorithm using juvenile anxiety data, with prediction and classification made from that data. A machine learning tool is used for pre-processing: the first level of the model is data preparation, and a ranking prototype is used to filter the data. Probabilistic estimation is then applied to find hypothesis values based on statistical functions, and an anxiety-predictor classification model is used for prediction and classification. Algorithm comparison and experiments are carried out using machine learning software. According to the experiments, the results show that the model is more efficient and accurate than other classification algorithms.
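The abstract does not specify which probabilistic classifier is used. One common choice of "probabilistic estimation based on statistical functions" is Gaussian naive Bayes; the sketch below illustrates that family on toy data and is not the paper's actual model.

```python
# Minimal Gaussian naive Bayes: per class, fit a mean/variance to a single
# feature and pick the class with the highest log-posterior. Toy data only.

import math
from collections import defaultdict

def fit(samples):  # samples: list of (feature_value, label)
    stats = defaultdict(list)
    for x, y in samples:
        stats[y].append(x)
    model = {}
    for y, xs in stats.items():
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs) or 1e-9
        model[y] = (mean, var, len(xs) / len(samples))
    return model

def predict(model, x):
    def log_post(mean, var, prior):
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (x - mean) ** 2 / (2 * var))
    return max(model, key=lambda y: log_post(*model[y]))

train = [(1.0, "low"), (1.2, "low"), (4.8, "high"), (5.1, "high")]
model = fit(train)
print(predict(model, 1.1), predict(model, 5.0))  # low high
```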


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Shuang Li ◽  
Baoguo Yu ◽  
Yi Jin ◽  
Lu Huang ◽  
Heng Zhang ◽  
...  

With the increasing demand for location-based services in venues such as railway stations, airports, and shopping malls, indoor positioning technology has become one of the most attractive research areas. Due to the effects of multipath propagation, wireless indoor localization methods such as WiFi, Bluetooth, and pseudolite have difficulty achieving high-precision positioning. In this work, we present an image-based localization approach which obtains the position simply by taking a picture of the surrounding environment. This paper proposes a novel approach which classifies different scenes based on deep belief networks and solves the camera position with several spatial reference points extracted from depth images by the perspective-n-point algorithm. To evaluate the performance, experiments are conducted on public data and real scenes; the results demonstrate that our approach can achieve submeter positioning accuracy. Compared with other methods, image-based indoor localization methods do not require infrastructure and have a wide range of applications, including self-driving, robot navigation, and augmented reality.
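The perspective-n-point step recovers the camera pose from known 3D reference points and their 2D projections. The pinhole projection it inverts can be sketched as below; the intrinsics and points are made-up values, and a real solver (rather than this forward model) would estimate rotation and translation.

```python
# Pinhole projection of 3D reference points (camera coordinates) to pixels:
# u = f*X/Z + cx, v = f*Y/Z + cy. PnP inverts this mapping to find the pose.

def project(point_3d, focal=800.0, cx=320.0, cy=240.0):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    X, Y, Z = point_3d
    return (focal * X / Z + cx, focal * Y / Z + cy)

# Spatial reference points (e.g. extracted from a depth image), in metres
refs = [(0.5, 0.2, 2.0), (-0.3, 0.1, 4.0)]
pixels = [project(p) for p in refs]
print(pixels)
```

Given at least three such 3D–2D correspondences (plus one for disambiguation), a PnP solver can recover the camera position used for localization.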


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-23
Author(s):  
Markku Laine ◽  
Yu Zhang ◽  
Simo Santala ◽  
Jussi P. P. Jokinen ◽  
Antti Oulasvirta

Over the past decade, responsive web design (RWD) has become the de facto standard for adapting web pages to the wide range of devices used for browsing. While RWD has improved the usability of web pages, it is not without drawbacks and limitations: designers and developers must manually design web layouts for multiple screen sizes and implement the associated adaptation rules, and its "one responsive design fits all" approach lacks support for personalization. This paper presents a novel approach for the automated generation of responsive and personalized web layouts. Given an existing web page design and preferences related to design objectives, our integer programming-based optimizer generates a consistent set of web designs. Where relevant data is available, these can be further automatically personalized for the user and browsing device. The paper also presents techniques for runtime adaptation of the generated designs into a fully responsive grid layout for web browsing. Results from our ratings-based online studies with end users (N = 86) and designers (N = 64) show that the proposed approach can automatically create high-quality responsive web layouts for a variety of real-world websites.
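The optimizer itself is an integer program over layout decisions. As a toy stand-in for that (not the paper's formulation), the sketch below exhaustively scores one integer decision, the number of grid columns, against a preferred column width; all numbers are illustrative.

```python
# Toy layout "optimizer": pick the integer column count whose resulting
# column width deviates least from a target width. A real system would
# solve an integer program over many such decisions jointly.

def score(columns, viewport_px, target_col_px=300):
    col_width = viewport_px / columns
    return abs(col_width - target_col_px)  # deviation from preferred width

def best_columns(viewport_px, max_cols=6):
    return min(range(1, max_cols + 1), key=lambda c: score(c, viewport_px))

for viewport in (320, 768, 1440):  # phone, tablet, desktop widths
    print(viewport, best_columns(viewport))
```

Enumerating candidates works for one variable; the appeal of integer programming is handling many coupled decisions (element placement, sizes, consistency constraints) under one objective.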


2020 ◽  
Vol 11 (1) ◽  
pp. 24
Author(s):  
Jin Tao ◽  
Kelly Brayton ◽  
Shira Broschat

Advances in genome sequencing technology and computing power have brought about the explosive growth of sequenced genomes in public repositories with a concomitant increase in annotation errors. Many protein sequences are annotated using computational analysis rather than experimental verification, leading to inaccuracies in annotation. Confirmation of existing protein annotations is urgently needed before misannotation becomes even more prevalent due to error propagation. In this work we present a novel approach for automatically confirming the existence of manually curated information with experimental evidence of protein annotation. Our ensemble learning method uses a combination of recurrent convolutional neural network, logistic regression, and support vector machine models. Natural language processing in the form of word embeddings is used with journal publication titles retrieved from the UniProtKB database. Importantly, we use recall as our most significant metric to ensure the maximum number of verifications possible; results are reported to a human curator for confirmation. Our ensemble model achieves 91.25% recall, 71.26% accuracy, 65.19% precision, and an F1 score of 76.05% and outperforms the Bidirectional Encoder Representations from Transformers for Biomedical Text Mining (BioBERT) model with fine-tuning using the same data.
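Two pieces of the pipeline above are easy to make concrete: combining the three classifiers' outputs and scoring with recall, the metric the authors prioritise. A minimal sketch with toy predictions (majority voting is one simple ensembling rule; the paper's exact combination scheme is not specified in the abstract):

```python
# Majority vote over three classifiers, plus recall = TP / (TP + FN),
# the metric emphasised to maximise the number of verifications found.

def majority_vote(*predictions):
    return [max(set(votes), key=votes.count)
            for votes in zip(*predictions)]

def recall(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn)

rcnn = [1, 1, 0, 1]      # toy per-title predictions from each model
logreg = [1, 0, 0, 1]
svm = [0, 1, 0, 1]
ensemble = majority_vote(rcnn, logreg, svm)
print(ensemble, recall([1, 1, 0, 1], ensemble))
```

Favouring recall over precision makes sense here because missed verifications are lost, whereas false positives are caught by the human curator downstream.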


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zekun Xu ◽  
Eric Laber ◽  
Ana-Maria Staicu ◽  
B. Duncan X. Lascelles

Abstract Osteoarthritis (OA) is a chronic condition often associated with pain, affecting approximately fourteen percent of the population, and increasing in prevalence. A globally aging population has made treating OA-associated pain, as well as maintaining mobility and activity, a public health priority. OA affects all mammals, and the use of spontaneous animal models is one promising approach for improving translational pain research and the development of effective treatment strategies. Accelerometers are a common tool for collecting high-frequency activity data on animals to study the effects of treatment on pain-related activity patterns. There has recently been increasing interest in their use to understand treatment effects in human pain conditions. However, activity patterns vary widely across subjects; furthermore, the effects of treatment may manifest in higher or lower activity counts or in subtler ways, such as changes in the frequency of certain types of activities. We use a zero-inflated Poisson hidden semi-Markov model to characterize activity patterns and subsequently derive estimators of the treatment effect in terms of changes in activity levels or frequency of activity type. We demonstrate the application of our model, and its advantages over traditional analysis methods, using data from a naturally occurring feline OA-associated pain model.
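The zero-inflated Poisson emission handles the fact that accelerometer counts contain more zeros than a plain Poisson predicts: with probability π a count is a "structural" zero (no activity), otherwise it is Poisson(λ). A minimal pmf sketch with toy parameters:

```python
# Zero-inflated Poisson pmf: P(K=0) = pi + (1-pi)*e^{-lam},
# P(K=k) = (1-pi) * e^{-lam} * lam^k / k!  for k > 0.

import math

def zip_pmf(k, lam, pi):
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

# Probability of a zero count under ZIP vs a plain Poisson at the same rate:
print(zip_pmf(0, lam=3.0, pi=0.4), math.exp(-3.0))
```

In the full model, the hidden semi-Markov states switch between activity regimes, each with its own (λ, π), which is what lets treatment effects appear as changes in activity type frequency rather than only total counts.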


BMJ Open ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. e047007
Author(s):  
Mari Terada ◽  
Hiroshi Ohtsu ◽  
Sho Saito ◽  
Kayoko Hayakawa ◽  
Shinya Tsuzuki ◽  
...  

Objectives To investigate the risk factors contributing to severity on admission. Additionally, risk factors for worst severity and fatality were studied. Moreover, factors were compared based on three points: early severity, worst severity and fatality. Design An observational cohort study using data entered in a Japan nationwide COVID-19 inpatient registry, COVIREGI-JP. Setting As of 28 September 2020, 10480 cases from 802 facilities had been registered. Participating facilities cover a wide range of hospitals where patients with COVID-19 are admitted in Japan. Participants Participants who had a positive test result on any applicable SARS-CoV-2 diagnostic tests were admitted to participating healthcare facilities. A total of 3829 cases were identified from 16 January to 31 May 2020, of which 3376 cases were included in this study. Primary and secondary outcome measures The primary outcome was severe or nonsevere status on admission, determined by the requirement of mechanical ventilation or oxygen therapy, SpO2 or respiratory rate. The secondary outcome was the worst severity during hospitalisation, judged by the requirement of oxygen and/or invasive mechanical ventilation/extracorporeal membrane oxygenation. Results Risk factors for severity on admission were older age, male sex, cardiovascular disease, chronic respiratory disease, diabetes, obesity and hypertension. Cerebrovascular disease, liver disease, renal disease or dialysis, solid tumour and hyperlipidaemia did not influence severity on admission; however, they influenced worst severity. Fatality rates for obesity, hypertension and hyperlipidaemia were relatively lower. Conclusions This study segregated the comorbidities influencing severity and death. It is possible that risk factors for severity on admission, worst severity and fatality are not consistent and may be propelled by different factors. Specifically, while hypertension, hyperlipidaemia and obesity had a major effect on worst severity, their impact was mild on fatality in the Japanese population. Some studies contradict our results; therefore, detailed analyses, considering in-hospital treatments, are needed for validation. Trial registration number UMIN000039873. https://upload.umin.ac.jp/cgi-open-bin/ctr_e/ctr_view.cgi?recptno=R000045453


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Bo Sun ◽  
Fei Zhang ◽  
Jing Li ◽  
Yicheng Yang ◽  
Xiaolin Diao ◽  
...  

Abstract Background With the development and application of medical information systems, semantic interoperability is essential for accurate and advanced health-related computing and electronic health record (EHR) information sharing. The openEHR approach can improve semantic interoperability. One key improvement of openEHR is that it allows for the reuse of existing archetypes. The crucial problem is how to improve precision and resolve ambiguity in archetype retrieval. Method Based on query expansion technology and the Word2Vec model in Natural Language Processing (NLP), we propose to find synonyms as substitutes for original search terms in archetype retrieval. Test sets at different medical professional levels are used to verify feasibility. Result Applying the approach to each original search term (n = 120) in the test sets, a total of 69,348 substitutes were constructed. Precision at 5 (P@5) was improved by 0.767 on average. For the best result, the P@5 was up to 0.975. Conclusions We introduce a novel approach that uses NLP technology and a corpus to find synonyms as substitutes for original search terms. Compared to simply mapping the elements contained in openEHR to an external dictionary, this approach could greatly improve precision and resolve ambiguity in retrieval tasks. This helps promote the application of openEHR and advance EHR information sharing.
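The retrieval idea reduces to two steps: expand a query term with synonyms (from a Word2Vec model in the paper; a hand-made dictionary stands in here) and score the ranked results with precision at 5. All terms and document IDs below are illustrative.

```python
# Query expansion plus P@k scoring. SYNONYMS stands in for the trained
# Word2Vec nearest-neighbour lookup used in the paper.

SYNONYMS = {"heart attack": ["myocardial infarction", "cardiac arrest"]}

def expand(term):
    """Return the original term plus its synonym substitutes."""
    return [term] + SYNONYMS.get(term, [])

def precision_at_k(retrieved, relevant, k=5):
    top = retrieved[:k]
    return sum(1 for doc in top if doc in relevant) / k

retrieved = ["a1", "a2", "a3", "a4", "a5", "a6"]  # ranked archetype IDs
relevant = {"a1", "a3", "a5", "a6"}               # ground-truth matches
print(expand("heart attack"))
print(precision_at_k(retrieved, relevant))  # 3 of top 5 relevant -> 0.6
```

Querying with each substitute and keeping the best-ranked archetypes is what lifts P@5 when the original term is ambiguous or uses non-standard wording.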

