Plain Language Assessment of Statutes

Author(s):  
Wolfgang Alschner ◽  
Daniel D’Alimonte ◽  
Giovanni C. Giuga ◽  
Sophie Gadbois

In several Anglo-American jurisdictions, legislative drafters use plain language drafting techniques to increase the readability of statutes. Existing readability metrics, such as Flesch-Kincaid, however, are a poor proxy for how effectively drafters follow these guidelines. This paper proposes a rules-based operationalization of the readability measures found in the plain language literature and tests it on legislation that underwent plain language rewriting. The results suggest that our metrics provide a more holistic picture of a statute's readability than traditional techniques do. Future machine-learning classifiers promise to further improve the detection of complex features, such as nominalizations.
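As a rough illustration of the kind of rules-based checks described above (not the paper's actual metric set), the sketch below computes a Flesch-Kincaid grade estimate and a crude suffix-based nominalization ratio; the suffix list and syllable counter are simplifying assumptions.

```python
import re

# Illustrative sketch only: a rough Flesch-Kincaid grade estimate and a simple
# suffix heuristic for nominalizations. The suffix list and syllable counter
# are assumptions, not the metrics defined in the paper.

NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ance", "ence", "ity", "ness")

def count_syllables(word: str) -> int:
    # Crude vowel-group count; dictionary-based counters are more accurate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def nominalization_ratio(text: str) -> float:
    # Rule-based proxy: long words ending in typical nominalizing suffixes.
    words = re.findall(r"[A-Za-z']+", text)
    hits = [w for w in words if len(w) > 7 and w.lower().endswith(NOMINAL_SUFFIXES)]
    return len(hits) / len(words) if words else 0.0

sample = "The determination of eligibility shall be made upon application."
print(flesch_kincaid_grade(sample), nominalization_ratio(sample))
```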


2019 ◽  
Vol 5 (12) ◽  
pp. eaay6946 ◽  
Author(s):  
Tyler W. Hughes ◽  
Ian A. D. Williamson ◽  
Momchil Minkov ◽  
Shanhui Fan

Analog machine learning hardware platforms promise to be faster and more energy efficient than their digital counterparts. Wave physics, as found in acoustics and optics, is a natural candidate for building analog processors for time-varying signals. Here, we identify a mapping between the dynamics of wave physics and the computation in recurrent neural networks. This mapping indicates that physical wave systems can be trained to learn complex features in temporal data, using standard training techniques for neural networks. As a demonstration, we show that an inverse-designed inhomogeneous medium can perform vowel classification on raw audio signals as their waveforms scatter and propagate through it, achieving performance comparable to a standard digital implementation of a recurrent neural network. These findings pave the way for a new class of analog machine learning platforms, capable of fast and efficient processing of information in its native domain.
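A minimal sketch of the wave-to-RNN mapping the abstract describes, assuming a 1-D scalar wave equation discretized with finite differences (not the authors' code): the field at the current and previous time steps acts as the recurrent hidden state, and the spatial wave-speed profile c(x) plays the role of the trainable weights.

```python
import numpy as np

# Minimal sketch (assumed 1-D scalar wave equation, periodic boundaries): each
# time step of the finite-difference update has the form of a recurrent cell.
def wave_rnn_step(u_now, u_prev, c, dt, dx, source):
    lap = (np.roll(u_now, -1) - 2 * u_now + np.roll(u_now, 1)) / dx**2  # Laplacian
    u_next = 2 * u_now - u_prev + (c * dt) ** 2 * lap
    u_next[0] += source            # inject the input signal at one grid point
    return u_next, u_now           # new hidden state (u_{t+1}, u_t)

n, dt, dx = 256, 1e-3, 1e-2
c = np.full(n, 1.0)                # "weights": inhomogeneous wave speed to be optimized
u_now, u_prev = np.zeros(n), np.zeros(n)
signal = np.sin(2 * np.pi * 40 * dt * np.arange(500))  # raw input waveform
for x_t in signal:                 # propagate the signal through the medium
    u_now, u_prev = wave_rnn_step(u_now, u_prev, c, dt, dx, x_t)
```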



2020 ◽  
Vol 8 ◽  
pp. 247-263 ◽  
Author(s):  
Burr Settles ◽  
Geoffrey T. LaFlair ◽  
Masato Hagiwara

We describe a method for rapidly creating language proficiency assessments, and provide experimental evidence that such tests can be valid, reliable, and secure. Our approach is the first to use machine learning and natural language processing to induce proficiency scales based on a given standard, and then use linguistic models to estimate item difficulty directly for computer-adaptive testing. This alleviates the need for expensive pilot testing with human subjects. We used these methods to develop an online proficiency exam called the Duolingo English Test, and demonstrate that its scores align significantly with other high-stakes English assessments. Furthermore, our approach produces test scores that are highly reliable, while generating item banks large enough to satisfy security requirements.
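The abstract does not give model details, so the following is only an illustrative sketch of the general idea: linguistic features predict an item's difficulty, a Rasch-style (one-parameter logistic) response model relates difficulty to a test taker's ability, and the adaptive step selects the item closest to the current ability estimate. The feature names and weights are invented for demonstration.

```python
import numpy as np

# Illustrative sketch, not the Duolingo English Test model.
def predict_difficulty(word_length: float, corpus_log_freq: float) -> float:
    # Hypothetical linear model: longer, rarer words are assumed harder.
    return 0.5 * word_length - 0.8 * corpus_log_freq

def p_correct(ability: float, difficulty: float) -> float:
    # Rasch (one-parameter logistic) response model.
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# Adaptive step: choose the item whose difficulty is closest to the current
# ability estimate, where the response is most informative.
item_difficulties = np.array([predict_difficulty(l, f)
                              for l, f in [(4, 3.2), (7, 1.1), (10, 0.4)]])
ability_estimate = 0.0
next_item = int(np.argmin(np.abs(item_difficulties - ability_estimate)))
print(next_item, p_correct(ability_estimate, item_difficulties[next_item]))
```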



2019 ◽  
Vol 9 (1) ◽  
pp. I
Author(s):  
Gopi Aryal

Artificial intelligence (AI) is machine intelligence that mimics human cognitive function. It denotes the intelligence exhibited by artificial entities such as computers and robots. In supervised learning, a machine is trained on data that contain pairs of inputs and outputs. In unsupervised learning, machines are given input data without labeled outputs and must find structure in them on their own.1 Machine learning refines a model that predicts outputs using sample inputs (features) and a feedback loop. It relies heavily on extracting or selecting salient features, a combination of art and science known as "feature engineering". A subset of feature learning is deep learning, which harnesses neural networks modeled on the biological nervous systems of animals. Deep learning discovers features from the raw data provided during training, with hidden layers of the artificial neural network representing increasingly complex features in the data. A convolutional neural network is a type of deep learning model commonly used for image analysis.
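A toy sketch of the two learning paradigms contrasted above, using synthetic data and scikit-learn (illustrative only): the supervised model is fit on input-output pairs, while the clustering step sees only the inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # labels are available

# Supervised learning: the model sees input-output pairs (X, y).
clf = LogisticRegression().fit(X, y)

# Unsupervised learning: the model sees only the inputs X and must find structure itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
```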



Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 134
Author(s):  
Yeonwoo Jeong ◽  
Yu-Jin Hong ◽  
Jae-Ho Han

Automating screening and diagnosis in medicine saves time, reduces the chance of misdiagnosis, and lowers labor and cost for physicians. With the development and growing feasibility of deep learning methods, machines can now interpret complex features in medical data, leading to rapid advances in automation. In ophthalmology, such efforts analyze retinal images and build frameworks on that analysis to identify retinopathy and assess its severity. This paper reviews recent state-of-the-art works that use the color fundus image, one of the imaging modalities used in ophthalmology. Specifically, deep learning methods for the automated screening and diagnosis of diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are investigated. In addition, machine learning techniques for extracting the retinal vasculature from the fundus image are covered. The challenges in developing these systems are also discussed.
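As a purely illustrative sketch (not drawn from the reviewed works), a small convolutional network of the kind such systems build on might map a color fundus image to diabetic retinopathy severity grades; the layer sizes and grade count here are assumptions.

```python
import torch
import torch.nn as nn

# Minimal illustrative sketch: a tiny CNN mapping an RGB fundus image to
# assumed DR severity grades 0-4. Real systems are far deeper and pretrained.
class FundusCNN(nn.Module):
    def __init__(self, n_grades: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_grades)

    def forward(self, x):  # x: (batch, 3, H, W) color fundus images
        return self.classifier(self.features(x).flatten(1))

logits = FundusCNN()(torch.randn(2, 3, 224, 224))  # two dummy images -> (2, 5) grade logits
```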



2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.
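To make the analogy concrete (a loose illustration only, not part of the commentary), data augmentation exposes a learner to varied renderings of the same underlying content:

```python
import numpy as np

# Toy "image" and three augmented variants: same content, varied context.
rng = np.random.default_rng(0)
image = rng.random((32, 32))

augmented = [
    np.fliplr(image),                                           # mirrored
    np.roll(image, shift=4, axis=0),                            # shifted
    np.clip(image + rng.normal(0, 0.1, image.shape), 0, 1),     # noisy
]
```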



1975 ◽  
Vol 6 (1) ◽  
pp. 24-28
Author(s):  
Shirley A. Nelson-Burgess ◽  
Marion D. Meyerson


1983 ◽  
Vol 14 (1) ◽  
pp. 7-21 ◽  
Author(s):  
Robert E. Owens ◽  
Martha J. Haney ◽  
Virginia E. Giesow ◽  
Lisa F. Dooley ◽  
Richard J. Kelly

This paper examines the test item content of several language assessment tools. A comparison of test breadth and depth is presented. The resultant information provides a diagnostic aid for school speech-language pathologists.



1985 ◽  
Vol 16 (4) ◽  
pp. 244-255
Author(s):  
Penelope K. Hall ◽  
Linda S. Jordan

The performance of 123 language-disordered children on the DeRenzi and Faglioni form of the Token Test and on the DeRenzi and Ferrari Reporter's Test was analyzed using two scoring conventions and then compared with the performance of children with presumed normal language development. Correlations with other commonly used language assessment instruments are cited. Use of the Token and Reporter's Tests with children exhibiting language disorders is suggested.


