A supervised machine-learning method for optimizing the automatic transmission system of wind turbines

2022 ◽  
Vol 10 (1) ◽  
pp. 35-56 ◽  
Author(s):  
Habeeb A. H. R. Aladwani ◽  
Mohd Khairol Anuar Ariffin ◽  
Faizal Mustapha

Large-scale wind turbines mostly use a Continuously Variable Transmission (CVT) as the transmission system, which is highly efficient but also highly complex and costly. In contrast, the small-scale wind turbines available on the market offer only a single-speed gearing system with no variable gear ratios, resulting in low energy-harvesting efficiency and leading to gear failure. In this research, a supervised machine-learning algorithm is proposed to address the energy efficiency of the automatic transmission system in vertical axis wind turbines (VAWTs) and to increase their efficiency in harvesting energy. The aim is to find the best adjustment for the VAWT while the automatic transmission system is taken into account. For this purpose, the system was simulated and tested under various gear ratio conditions, with a centrifugal clutch applied for automatic gear shifting. The outcomes indicated that the automatic transmission system could successfully adjust its spinning in line with the wind speed. As a result, the levels of voltage and power harvested by the VAWT with the automatic transmission system improved significantly. Consequently, it is concluded that automatic VAWTs equipped with machine-learning capability can readjust themselves to the wind speed more efficiently.
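As a hedged illustration of the shifting logic described above (the paper's own training data and model are not given here), the sketch below fits a simple classifier that maps measured wind speed to the gear ratio that performed best in simulated trials; all names, values, and the choice of model are hypothetical.

```python
# Illustrative sketch: learn a wind-speed -> gear-ratio policy from
# simulated trials. Feature and target values are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical simulation log: wind speed (m/s) and the gear ratio
# index that yielded the highest harvested power at that speed.
wind_speed = np.array([[2.0], [3.5], [5.0], [6.5], [8.0], [9.5], [11.0]])
best_ratio = np.array([1, 1, 2, 2, 3, 3, 4])

model = DecisionTreeClassifier(max_depth=3).fit(wind_speed, best_ratio)

# The controller queries the model as wind conditions change; the
# centrifugal clutch then performs the actual shift mechanically.
print(model.predict([[7.2]]))  # -> suggested gear ratio index
```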

2021 ◽  
Author(s):  
Renan M Costa ◽  
Vijay A Dharmaraj ◽  
Ryota Homma ◽  
Curtis L Neveu ◽  
William B Kristan ◽  
...  

A major limitation of large-scale neuronal recordings is the difficulty of locating the same neuron in different subjects, referred to as the "correspondence" issue. This issue stems, at least in part, from the lack of a unique feature that unequivocally identifies each neuron. One promising approach to this problem is the functional neurocartography framework developed by Frady et al. (2016), in which neurons are identified by a semi-supervised machine learning algorithm using a combination of multiple selected features. Here, the framework was adapted to the buccal ganglia of Aplysia. Multiple features were derived from neuronal activity during motor pattern generation, responses to peripheral nerve stimulation, and the spatial properties of each cell. The feature set was optimized based on its potential usefulness in discriminating neurons from each other, and then used to match putatively homologous neurons across subjects with the functional neurocartography software. A matching method was developed based on a cyclic matching algorithm that allows for unsupervised extraction of groups of neurons, thereby enhancing the scalability of the analysis. Cyclic matching was also used to automate the selection of high-quality matches, which allowed for unsupervised implementation of the machine learning algorithm. This study paves the way for investigating the roles of both well-characterized and previously uncharacterized neurons in Aplysia, and helps to adapt this framework to other systems.
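A minimal sketch of the cycle-consistency idea behind cyclic matching, assuming pairwise matching by optimal assignment on feature distances; the random features and three-subject setup are stand-ins, not the study's actual data or software.

```python
# Cycle-consistent matching across three subjects: keep only matches
# that return to the starting neuron after A -> B -> C -> A.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
feats = [rng.normal(size=(30, 8)) for _ in range(3)]  # 3 subjects, 30 cells each

def match(a, b):
    """Optimal one-to-one assignment minimizing feature distance."""
    cost = cdist(a, b)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows, cols))

ab = match(feats[0], feats[1])
bc = match(feats[1], feats[2])
ca = match(feats[2], feats[0])

# Matches that close the cycle are the high-confidence putative
# homologues, extracted without supervision.
cycle_consistent = [i for i in ab if ca[bc[ab[i]]] == i]
print(f"{len(cycle_consistent)} cycle-consistent matches")
```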


Author(s):  
Loretta H. Cheeks ◽  
Tracy L. Stepien ◽  
Dara M. Wald ◽  
Ashraf Gaffar

The Internet is a major source of online news content. Current efforts to evaluate online news content, including text, storyline, and sources, are limited by the use of small-scale manual techniques that are time-consuming and dependent on human judgment. This article explores the use of machine learning algorithms and mathematical techniques for Internet-scale data mining and semantic discovery of news content, enabling researchers to mine, analyze, and visualize large-scale datasets. This research has the potential to inform the integration and application of data mining to address real-world socio-environmental issues, including water insecurity in the Southwestern United States. The paper establishes a formal definition of framing and proposes an approach for discovering distinct patterns that characterize prominent frames. The authors' experimental evaluation shows that the proposed process is an effective approach for advancing semi-supervised machine learning and may assist in advancing tools for making sense of unstructured text.
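As one hedged illustration of the semi-supervised setting described above (the article's actual algorithms are not reproduced here), the sketch below uses scikit-learn's LabelSpreading on TF-IDF features to propagate a few hand-assigned frame labels to unlabeled articles; the texts and frame names are invented.

```python
# Semi-supervised frame discovery sketch: a handful of hand-labeled
# articles propagate their frame labels to unlabeled ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

docs = [
    "Drought forces new limits on groundwater pumping",      # scarcity frame
    "Reservoir levels fall as demand keeps climbing",        # scarcity frame
    "New canal project promises reliable supply for farms",  # infrastructure frame
    "Desalination plant breaks ground on the coast",         # infrastructure frame
    "City council debates tiered water pricing",             # unlabeled
    "Aquifer recharge pilot shows early promise",            # unlabeled
]
labels = [0, 0, 1, 1, -1, -1]  # -1 marks unlabeled documents

X = TfidfVectorizer().fit_transform(docs).toarray()
model = LabelSpreading(kernel="knn", n_neighbors=3).fit(X, labels)
print(model.transduction_)  # inferred frame for every document
```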


2019 ◽  
Vol 23 (1) ◽  
pp. 12-21 ◽  
Author(s):  
Shikha N. Khera ◽  
Divya

The information technology (IT) industry in India has been facing a systemic issue of high attrition in the past few years, resulting in monetary and knowledge-based losses to companies. The aim of this research is to develop a model that predicts employee attrition and gives organizations the opportunity to address issues and improve retention. A predictive model was developed based on a supervised machine learning algorithm, the support vector machine (SVM). Archival employee data (consisting of 22 input features) were collected from the human resource databases of three IT companies in India, including each employee's employment status (the response variable) at the time of collection. Accuracy results from the confusion matrix showed that the SVM model has an accuracy of 85 per cent. The results also show that the model performs better at predicting who will leave the firm than at predicting who will not.
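A minimal sketch of the modeling step described above, using scikit-learn's SVC on synthetic stand-in data, since the 22 proprietary HR features are not available; the feature values and label rule are invented.

```python
# SVM attrition model sketch on synthetic stand-in data
# (22 features per employee, binary attrition label).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 22))            # 22 input features per employee
y = (X[:, 0] + X[:, 1] > 0.5).astype(int)  # 1 = left the firm (synthetic rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)

pred = model.predict(X_te)
print(confusion_matrix(y_te, pred))        # per-class errors, as in the study
print(f"accuracy: {accuracy_score(y_te, pred):.2f}")
```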


Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3598
Author(s):  
Sara Russo ◽  
Pasquale Contestabile ◽  
Andrea Bardazzi ◽  
Elisa Leone ◽  
Gregorio Iglesias ◽  
...  

New large-scale laboratory data are presented on a physical model of a spar buoy wind turbine with angular motion of the control surfaces implemented (pitch control). This type of rotating blade is an essential aspect when studying floating offshore wind structures. Experiments were designed specifically to compare different operational environmental conditions in terms of wave steepness and wind speed. The results discussed here were derived from an analysis of only part of the whole dataset. Consistent with recent small-scale experiments, the data clearly show that the waves contributed most of the model motions and mooring loads. Significant nonlinear behavior was detected for sway, roll, and yaw, whereas an increase in the wave period makes the wind speed less influential for surge, heave, and pitch. In general, as the steepness increases, the oscillations decrease. However, higher wind speed does not mean greater platform motions. The data also indicate a significant role of blade rotation in the turbine thrust, nacelle dynamic forces, and power in six degrees of freedom. Certain pairs of wind speed and wave steepness are particularly unfavorable, since the first harmonic of the rotor (coupled to the first wave harmonic) causes the thrust force to be larger than in more energetic sea states. The experiments suggest that including pitch-controlled, variable-speed blades in physical (and numerical) tests on such structures is crucial, highlighting pitch motion as an important design factor.


2021 ◽  
Vol 11 (2) ◽  
pp. 472
Author(s):  
Hyeongmin Cho ◽  
Sangkyun Lee

Machine learning has proven effective in various application areas, such as object and speech recognition on mobile systems. Since the availability of large training datasets is a critical key to machine-learning success, many datasets are being disclosed and published online. From a data consumer's or manager's point of view, measuring data quality is an important first step in the learning process: we need to determine which datasets to use, update, and maintain. However, few practical ways to measure data quality are available today, especially for large-scale high-dimensional data such as images and videos. This paper proposes two data quality measures that can compute class separability and in-class variability, two important aspects of data quality, for a given dataset. Classical data quality measures tend to focus only on class separability; however, we suggest that in-class variability is another important data quality factor. We provide efficient algorithms to compute our quality measures based on random projections and bootstrapping, with statistical benefits on large-scale high-dimensional data. In experiments, we show that our measures are compatible with classical measures on small-scale data and can be computed much more efficiently on large-scale high-dimensional datasets.
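The following is an illustrative sketch, not the paper's exact measures: it estimates class separability (distance between class centroids) and in-class variability (spread around each centroid) on bootstrapped random projections, the two ingredients named above; the data, dimensions, and sample sizes are stand-ins.

```python
# Estimate two data quality measures cheaply via random projections
# plus bootstrapping, so the cost stays low on high-dimensional data.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

def quality_measures(X, y, dim=32, n_boot=20, sample=500, seed=0):
    rng = np.random.default_rng(seed)
    sep, var = [], []
    for _ in range(n_boot):
        idx = rng.choice(len(X), size=min(sample, len(X)), replace=True)
        Z = GaussianRandomProjection(n_components=dim).fit_transform(X[idx])
        yb = y[idx]
        classes = np.unique(yb)
        centroids = np.array([Z[yb == c].mean(axis=0) for c in classes])
        # in-class variability: mean distance of points to their centroid
        within = np.mean([np.linalg.norm(Z[yb == c] - centroids[i], axis=1).mean()
                          for i, c in enumerate(classes)])
        # class separability: mean distance between class centroids
        diffs = centroids[:, None, :] - centroids[None, :, :]
        k = len(classes)
        between = np.linalg.norm(diffs, axis=-1).sum() / (k * k - k)
        sep.append(between); var.append(within)
    return np.mean(sep), np.mean(var)

X = np.random.default_rng(1).normal(size=(2000, 3072))   # stand-in "images"
y = np.random.default_rng(2).integers(0, 10, size=2000)  # 10 stand-in classes
print(quality_measures(X, y))
```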


Friction ◽  
2021 ◽  
Author(s):  
Vigneashwara Pandiyan ◽  
Josef Prost ◽  
Georg Vorlaufer ◽  
Markus Varga ◽  
Kilian Wasmer

Functional surfaces in relative contact and motion are prone to wear and tear, resulting in loss of efficiency and performance of the workpieces/machines. Wear occurs in the form of adhesion, abrasion, scuffing, galling, and scoring between contacts. The rate of wear, however, depends primarily on the physical properties and the surrounding environment. Monitoring the integrity of surfaces by offline inspection leads to significant wasted machine time. A potential alternative to the offline inspection currently practiced in industry is the analysis of sensor signatures capable of capturing the wear state and correlating it with the wear phenomenon, followed by in situ classification using a state-of-the-art machine learning (ML) algorithm. Though this technique is better than offline inspection, it has inherent disadvantages for training the ML models. Ideally, supervised training of ML models requires the datasets considered for classification to be of equal weight to avoid bias. Collecting such a dataset is very cumbersome and expensive in practice, as in real industrial applications the malfunction period is minimal compared to normal operation. Furthermore, classification models cannot distinguish new, unfamiliar wear phenomena from the normal regime. As a promising alternative, in this work we propose a methodology able to differentiate the abnormal regimes, i.e., wear phenomenon regimes, from the normal regime. This is carried out by familiarizing the ML algorithms only with the distribution of the acoustic emission (AE) signals, captured using a microphone, that relate to the normal regime. As a result, the ML algorithms can detect whether any overlap exists with the learnt distributions when a new, unseen signal arrives. To achieve this goal, a generative convolutional neural network (CNN) architecture based on a variational autoencoder (VAE) is built and trained. During validation of the proposed CNN architecture, we were able to identify acoustic signals corresponding to the normal and abnormal wear regimes with accuracies of 97% and 80%, respectively. Hence, our approach shows very promising results for in situ, real-time condition monitoring and even wear prediction in tribological applications.
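A hedged sketch of the core idea in PyTorch: a small VAE (a dense one here, not the paper's CNN architecture) is trained only on normal-regime AE windows, and unseen signals that reconstruct poorly are flagged as abnormal; the window length, network sizes, and threshold are assumptions.

```python
# Train a VAE on normal-regime AE windows only; flag poorly
# reconstructed signals as candidate wear regimes.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_in=256, n_lat=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_lat)
        self.logvar = nn.Linear(64, n_lat)
        self.dec = nn.Sequential(nn.Linear(n_lat, 64), nn.ReLU(), nn.Linear(64, n_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def loss_fn(recon, x, mu, logvar):
    rec = ((recon - x) ** 2).sum(dim=1).mean()                   # reconstruction
    kld = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(dim=1).mean()
    return rec + kld

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_windows = torch.randn(512, 256)  # stand-in for normal-regime AE windows
for _ in range(50):
    recon, mu, logvar = model(normal_windows)
    loss = loss_fn(recon, normal_windows, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# Monitoring time: high reconstruction error means the signal lies
# outside the learnt normal distribution, even for unseen wear types.
def is_abnormal(x, threshold):
    with torch.no_grad():
        recon, _, _ = model(x)
    return ((recon - x) ** 2).sum(dim=1) > threshold
```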


Genes ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 527
Author(s):  
Eran Elhaik ◽  
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evol. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evol. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.


2017 ◽  
Vol 139 (5) ◽  
Author(s):  
Sara Benyakhlef ◽  
Ahmed Al Mers ◽  
Ossama Merroun ◽  
Abdelfattah Bouatem ◽  
Hamid Ajdad ◽  
...  

Reducing the levelized electricity cost of concentrated solar power (CSP) plants has great potential to accelerate the market penetration of these sustainable technologies. Linear Fresnel reflectors (LFRs) are one CSP technology that may contribute to such cost reduction. However, owing to very little previous research, LFRs are considered a low-efficiency technology. For this type of solar collector, a variety of design approaches exist when it comes to optimizing such systems. The present paper tackles a new research axis based on a variability study of heliostat curvature as an approach to optimizing small- and large-scale LFRs. Numerical investigations based on a ray-tracing model demonstrate that LFR constructors should adopt a uniform curvature for small-scale LFRs and a variable curvature per row for large-scale LFRs. Better optical performance was obtained for LFRs with these curvature types. An optimization approach based on uniform heliostat curvature for small-scale LFRs led to a system cost reduction by reducing the receiver surface and height.
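As a toy illustration of the per-row versus uniform curvature rule suggested above (the paper's ray-tracing model is not reproduced), the sketch below ties each row's curvature radius to its slant distance from the receiver via the paraxial mirror relation f = R/2; the plant geometry is invented.

```python
# Toy curvature sizing for an LFR field: a focusing mirror at slant
# distance d from the receiver wants radius R ~ 2d (since f = R/2).
import math

receiver_height = 8.0                          # m (hypothetical)
row_offsets = [0.5, 1.5, 2.5, 4.0, 6.0, 9.0]   # lateral row distances, m

def slant_distance(x, h):
    return math.hypot(x, h)

# Variable curvature per row (suited to large-scale LFRs):
per_row_R = [2 * slant_distance(x, receiver_height) for x in row_offsets]

# Uniform curvature (suited to small-scale LFRs): one radius for all
# rows, here matched to the mean slant distance.
uniform_R = 2 * sum(slant_distance(x, receiver_height)
                    for x in row_offsets) / len(row_offsets)

for x, R in zip(row_offsets, per_row_R):
    print(f"row at {x:4.1f} m -> R = {R:5.2f} m (uniform: {uniform_R:.2f} m)")
```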

