Application of artificially intelligent systems for the identification of discrete fossiliferous levels

PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e8767
Author(s):  
David M. Martín-Perea ◽  
Lloyd A. Courtenay ◽  
M. Soledad Domingo ◽  
Jorge Morales

The separation of discrete fossiliferous levels within an archaeological or paleontological site with no clear stratigraphic horizons has historically been carried out using qualitative approaches, relying on two-dimensional transversal and longitudinal projection planes. Analyses of this type, however, can often be conditioned by the subjectivity and perspective of the analyst. This study presents a novel use of Machine Learning pattern recognition algorithms for the automated separation and identification of fossiliferous levels. The approach can be divided into three main steps: (1) unsupervised Machine Learning for density-based clustering; (2) expert-in-the-loop Collaborative Intelligence Learning for the integration of geological data; and (3) supervised learning for the final fine-tuning of fossiliferous level models. For evaluation, the method was tested at two Late Miocene sites of the Batallones Butte paleontological complex (Madrid, Spain). Here we show Machine Learning analyses to be a valuable tool for processing spatial data in an efficient and quantitative manner, successfully identifying the presence of discrete fossiliferous levels in both Batallones-3 and Batallones-10: three discrete fossiliferous levels were identified in Batallones-3, and another three were differentiated in Batallones-10.
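The density-based clustering step (step 1) can be illustrated with a minimal sketch using scikit-learn's DBSCAN on synthetic three-dimensional find coordinates. The data, the `eps`, and the `min_samples` values below are invented for demonstration and are not those used in the study.

```python
# Sketch of density-based clustering of fossil find coordinates (step 1).
# Synthetic data: two point clouds separated mainly in depth (z).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
level_a = rng.normal(loc=[5.0, 5.0, -1.0], scale=0.3, size=(60, 3))
level_b = rng.normal(loc=[5.0, 5.0, -4.0], scale=0.3, size=(60, 3))
coords = np.vstack([level_a, level_b])

# DBSCAN groups densely packed finds and marks sparse points as noise (-1).
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(coords)
n_levels = len(set(labels) - {-1})
print(n_levels)
```

On such clearly separated synthetic clouds the algorithm recovers the two "levels" without being told how many to look for, which is the appeal of the unsupervised first step.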

Author(s):  
Namrata Dhanda ◽  
Stuti Shukla Datta ◽  
Mudrika Dhanda

Human intelligence is deeply involved in creating efficient and faster systems that can work independently. Creating such smart systems requires efficient training algorithms. The aim of this chapter is therefore to introduce readers to the concept of machine learning and to the learning algorithms commonly employed in developing efficient and intelligent systems. The chapter draws a clear distinction between supervised and unsupervised learning methods, and each algorithm is explained with the help of a suitable example to give insight into the learning process.
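The supervised/unsupervised distinction the chapter draws can be shown in a few lines: a supervised learner is trained on labelled pairs, while an unsupervised learner sees only the inputs and must infer structure. The toy data below is invented for illustration.

```python
# Supervised vs. unsupervised learning on the same toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),   # group 1
               rng.normal(4.0, 0.5, (50, 2))])  # group 2
y = np.array([0] * 50 + [1] * 50)               # labels, seen only by the supervised model

# Supervised: trained on (X, y) pairs, then scored against the known labels.
clf = LogisticRegression().fit(X, y)
acc = clf.score(X, y)

# Unsupervised: sees X alone and partitions it into two clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
```

On well-separated groups both approaches recover the same partition, but only the supervised model can attach the original label names to its predictions.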


2019 ◽  
Vol 406 ◽  
pp. 109-120 ◽  
Author(s):  
Patrick Schratz ◽  
Jannes Muenchow ◽  
Eugenia Iturritxa ◽  
Jakob Richter ◽  
Alexander Brenning

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jian Jiang ◽  
Fen Zhang

As the planet watches in shock the evolution of the COVID-19 pandemic, new forms of sophisticated, versatile, and extremely difficult-to-detect malware expose society, and especially the global economy, to new risks. Machine learning techniques play an increasingly important role in malware identification and analysis. However, due to the complexity of the problem, conventionally trained intelligent systems prove insufficient for recognizing advanced cyberthreats. The biggest challenge in securing information systems with machine learning methods is to understand the polymorphism and metamorphism mechanisms used by malware developers and how to address them effectively. This work presents an innovative Artificial Evolutionary Fuzzy LSTM Immune System which, by using a heuristic machine learning method that combines evolutionary intelligence, Long Short-Term Memory (LSTM), and fuzzy knowledge, proves able to adequately protect modern information systems from Portable Executable (PE) malware. The main innovation of the proposed approach is that the machine learning system is trained solely on the raw bytes of an executable file to determine whether the file is malicious. The performance of the proposed system was tested on a sophisticated, highly complex dataset that emerged from extensive research on PE malware and offered a realistic representation of their operating states. The high accuracy of the developed model strongly supports the validity of the proposed method. The final evaluation, carried out through in-depth comparisons with corresponding machine learning algorithms, revealed the superiority of the proposed immune system.
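The raw-byte input idea can be sketched as a preprocessing step: an executable's bytes are truncated or padded to a fixed length and scaled into [0, 1] so a recurrent model such as an LSTM can consume them as a sequence. The sequence length and scaling below are illustrative choices, not the paper's actual parameters, and the "PE header" is a fake stand-in.

```python
# Turning raw executable bytes into a fixed-length numeric sequence.
import numpy as np

def bytes_to_sequence(raw: bytes, max_len: int = 4096) -> np.ndarray:
    """Truncate/pad raw bytes to max_len and scale each byte into [0, 1]."""
    buf = np.frombuffer(raw[:max_len], dtype=np.uint8).astype(np.float32)
    if buf.size < max_len:
        buf = np.pad(buf, (0, max_len - buf.size))  # zero-pad short files
    return buf / 255.0  # normalized input for a sequence model

# Example: a fake "MZ" header followed by arbitrary byte values.
sample = b"MZ\x90\x00" + bytes(range(256))
seq = bytes_to_sequence(sample, max_len=512)
```

The resulting array would be fed to the sequence model; no hand-crafted features (imports, section entropy, strings) are extracted, which is exactly what "trained solely on raw bytes" means.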


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Morshedul Bari Antor ◽  
A. H. M. Shafayet Jamil ◽  
Maliha Mamtaz ◽  
Mohammad Monirujjaman Khan ◽  
Sultan Aljahdali ◽  
...  

Alzheimer’s disease has recently been one of the major health concerns: around 45 million people suffer from it. Alzheimer’s is a degenerative brain disease with an unspecified cause and pathogenesis which primarily affects older people, and it is the most common cause of dementia, progressively damaging brain cells; people lose their ability to think, to read, and much more. A machine learning system can help by predicting the disease. The main aim is to recognize dementia among various patients. This paper presents the results and analysis of detecting dementia with various machine learning models. The Open Access Series of Imaging Studies (OASIS) dataset was used for the development of the system. The dataset is small, but it contains significant values. The dataset was analyzed and applied to several machine learning models: support vector machine, logistic regression, decision tree, and random forest were used for prediction. The system was first run without fine-tuning and then with fine-tuning. Comparing the results, the support vector machine provides the best results among the models, with the best accuracy in detecting dementia among numerous patients. The system is simple and can easily help people by detecting dementia among them.
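The four-model comparison described above can be sketched with scikit-learn. Since the OASIS data is not bundled here, a synthetic stand-in dataset is used; the feature counts and random seeds are illustrative only.

```python
# Comparing SVM, logistic regression, decision tree, and random forest
# by cross-validated accuracy on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=4,
                           random_state=42)  # stand-in for tabular patient features

models = {
    "SVM": SVC(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Decision tree": DecisionTreeClassifier(random_state=42),
    "Random forest": RandomForestClassifier(random_state=42),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```

Cross-validation rather than a single train/test split gives a fairer ranking on a small dataset, which is the situation the abstract describes.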


Author(s):  
D. Krivoguz ◽  
R. Borovskaya
This research has been aimed at finding possibilities for applying linear regression models, as part of the machine learning toolbox, to visual representation of the spatial patterns of Artemia salina distribution in the Southern Sivash. Development of such models allows for estimation of A. salina biomass in water bodies with high accuracy. To investigate maximum absorption levels in different parts of the light spectrum, spectral signatures at all the monitoring stations have been compared with the satellite data, and the absorption spectra of astaxanthin and hemoglobin have been analyzed with a spectrophotometer. As a result, the Sentinel-2 satellite looks very promising as a key spatial data provider that can be of major help in increasing the frequency of A. salina monitoring in the Southern Sivash. The linear regression models fitted by third- and fourth-degree polynomials have shown satisfactory results, suitable for subsequent use in fisheries. On the other hand, it should be noted that these models are slightly prone to overfitting, which can to some extent distort further forecasts based on new data. In turn, linear regression models fitted by a first-degree polynomial show less accurate results, but they have the advantage of not tending to overfit. It is also worth noting that the small datasets within the scope of this investigation do not appear to be problematic, and simple machine learning algorithms can provide good accuracy, suitable for further application in this field.
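The degree-1 versus higher-degree trade-off discussed above can be sketched with NumPy's polynomial fitting. The data below is invented (roughly linear with noise, standing in for a reflectance-vs-biomass relationship); the key point is that a higher-degree fit can only lower the training error, which is exactly the overfitting risk described.

```python
# Degree-1 vs. degree-4 polynomial regression on noisy, roughly linear data.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)  # linear ground truth + noise

def train_error(degree: int) -> float:
    """Mean squared error of a least-squares polynomial fit on the training data."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return float(np.mean(residuals ** 2))

err1, err4 = train_error(1), train_error(4)
# err4 <= err1 is guaranteed (nested least-squares models), but the extra
# degrees mostly fit noise, so the degree-4 model generalizes worse.
```

A lower training error from the degree-4 fit is therefore not evidence of a better model; validating on held-out data is what separates the two cases.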


2021 ◽  
Author(s):  
Nitin Johri ◽  
Nimish Pandey ◽  
Sanket Kadam ◽  
Sanjeev Vermani ◽  
Shubham Agarwal ◽  
...  

Abstract. Data monitoring in a remote satellite field without any DOF platform is a challenging task but critical for ALS monitoring and optimization. In SRP wells, VFD data collection is important for analysis of downhole pump behavior and system health. The SRP maintenance crew collects data from VFDs daily, but this is time consuming and can cover only a few wells in a day. The steps from the request for a dynacard to the final ALS-optimization decision are: mobilizing the team, permit approvals, downloading the data, e-mailing the dynacards, dyna visualization, and the final decision. The problems with the above process were:
- insufficient and discrete data for any post-failure analysis or ALS optimization;
- minimal data to investigate pre-failure events.
The lack of real-time monitoring resulted in well downtime and associated production loss. A combination of IoT, cloud computing, and machine learning was implemented to shift from a reactive to a proactive approach, which helped in ALS optimization and reduced production loss. The data was transmitted to a cloud server and onward to a web-based app. Since thousands of dynacards are generated in a day, automated classification using computer-driven pattern recognition techniques is required. The real-time data is used for analysis involving basic statistics and machine learning algorithms. Critical pump signatures were identified using machine learning libraries, and an e-mail is generated for immediate action. Several informative dashboards were developed to provide quick analysis of ALS performance:
- Well Operational Status
- Dynacards Interpretation module
- SRP parameters visualization
- Machine Learning model calibration module
- Pump Performance Statistics
After collecting enough data and creating analytical dashboards for the three wells using domain knowledge, the insights gained were used for ALS optimization.
To keep the model in an evergreen, high-confidence prediction state, inputs from domain experts are often required. After regular fine-tuning, the prediction accuracy of the ML model increased to 80-85%. In addition, the system was made flexible so that a new algorithm can be deployed when required. Smart alarms involving statistics and machine learning were generated by the system, which alerts by e-mail if abnormal behavior or erratic dynacards are identified. This helped reduce well downtime in some events that were previously handled instinctively. The integration of domain knowledge and digitalization enables an engineer to take informed and effective decisions. The techniques discussed above can be implemented in marginal fields where DOF implementation is logistically and economically challenging. EDGE, along with advanced analytics, will see further technological advances and can be applied in other potential domains in the near future.
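The statistics-based side of the smart alarms can be sketched as a rolling deviation check: flag a reading when it departs from the recent rolling mean by more than k standard deviations. The window size, threshold, and pump-load values below are illustrative, not field-calibrated parameters.

```python
# Minimal statistics-based smart alarm: flag readings that deviate from the
# recent rolling mean by more than k standard deviations.
import statistics

def smart_alarm(readings, window=10, k=3.0):
    """Return indices of readings deviating > k sigma from the prior window."""
    alerts = []
    for i in range(window, len(readings)):
        prior = readings[i - window:i]
        mu = statistics.fmean(prior)
        sigma = statistics.pstdev(prior) or 1e-9  # guard against zero spread
        if abs(readings[i] - mu) / sigma > k:
            alerts.append(i)
    return alerts

# Steady pump-load signal with one injected anomaly at index 15.
load = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0,
        10.1, 9.9, 10.0, 10.2, 9.8, 25.0, 10.0, 10.1]
alerts = smart_alarm(load)
```

In a deployed system the flagged index would trigger the e-mail alert described above; the ML-based alarms would complement this by classifying the shape of the dynacard itself rather than a single scalar.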


2020 ◽  
Author(s):  
Victoria Da Poian ◽  
Eric Lyness ◽  
Melissa Trainer ◽  
Xiang Li ◽  
William Brinckerhoff ◽  
...  

The majority of planetary missions return only one thing: data. The volume of data returned from distant planets is typically minuscule compared to Earth-based investigations, and it decreases further for more distant solar system missions. Meanwhile, the data produced by planetary science instruments continue to grow along with mission ambitions. Moreover, the time required for decisional data to reach science and operations teams on Earth, and for commands to be sent, also increases with distance. To maximize the value of each bit within these mission time and volume constraints, instruments need to be selective about what they send back to Earth. We envision instruments that analyze science data onboard, such that they can adjust and tune themselves, select the next operations to run without requiring ground-in-the-loop, and transmit home only the most interesting or time-critical data.

Recent developments have demonstrated the tremendous potential of robotic explorers for planetary exploration and for other extreme environments. We believe that science autonomy has the potential to be as important as robotic autonomy (e.g., roving terrain) in improving the science potential of these missions, because it directly optimizes the returned data. On-board science data processing, interpretation, and reaction, as well as prioritization of telemetry, therefore comprise new, critical challenges of mission design.

We present a first step toward this vision: a machine learning (ML) approach for analyzing science data from the Mars Organic Molecule Analyzer (MOMA) instrument, which will land on Mars within the ExoMars rover Rosalind Franklin in 2023. MOMA is a dual-source (laser desorption and gas chromatograph) mass spectrometer that will search for past or present life on the Martian surface and subsurface through analysis of soil samples. We use data collected from the MOMA flight-like engineering model to develop mass-spectrometry-focused machine learning techniques. We first apply unsupervised algorithms to cluster input data based on inherent patterns and separate the bulk data into clusters. Then, optimized classification algorithms designed for MOMA’s scientific goals provide information to the scientists about the likely content of the sample. This will help the scientists with their analysis of the sample and their decision-making regarding subsequent operations.

We used MOMA data to develop initial machine learning algorithms and strategies as a proof of concept, and to design software to support intelligent operations of more autonomous systems in development for future exploratory missions. This data characterization and categorization is the first step of a longer-term objective: to enable the spacecraft and instruments themselves to make real-time adjustments during operations, thus optimizing the potentially complex search for life in our solar system and beyond.
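The cluster-then-classify pipeline described above can be sketched on synthetic "spectra" (the MOMA engineering-model data is not public here): unsupervised clustering first exposes the inherent groupings, then a supervised classifier trained on scientist-provided labels reports the likely sample content. All data, sizes, and model choices below are illustrative assumptions.

```python
# Two-stage pipeline: unsupervised clustering, then supervised classification,
# on synthetic stand-in mass spectra.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
# Two synthetic spectrum families with peaks at different mass channels.
spectra_a = rng.normal(0, 0.05, (40, 50)); spectra_a[:, 10] += 1.0
spectra_b = rng.normal(0, 0.05, (40, 50)); spectra_b[:, 30] += 1.0
X = np.vstack([spectra_a, spectra_b])
y = np.array([0] * 40 + [1] * 40)  # scientist-provided content labels

# Stage 1: unsupervised clustering separates the bulk data by inherent pattern.
clusters = KMeans(n_clusters=2, n_init=10, random_state=7).fit_predict(X)

# Stage 2: a supervised classifier maps spectra to likely sample content.
clf = RandomForestClassifier(random_state=7).fit(X, y)
pred = clf.predict(X)
```

In the mission setting, stage 1 runs without labels to organize incoming data, while stage 2 encodes the scientists' prior knowledge and supports the decision about which operation to run next.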


2020 ◽  
Vol 102-B (6_Supple_A) ◽  
pp. 101-106
Author(s):  
Romil F. Shah ◽  
Stefano A. Bini ◽  
Alejandro M. Martinez ◽  
Valentina Pedoia ◽  
Thomas P. Vail

Aims The aim of this study was to evaluate the ability of a machine-learning algorithm to diagnose prosthetic loosening from preoperative radiographs and to investigate the inputs that might improve its performance. Methods A group of 697 patients underwent a first-time revision of a total hip (THA) or total knee arthroplasty (TKA) at our institution between 2012 and 2018. Preoperative anteroposterior (AP) and lateral radiographs, and historical and comorbidity information were collected from their electronic records. Each patient was defined as having loose or fixed components based on the operation notes. We trained a series of convolutional neural network (CNN) models to predict a diagnosis of loosening at the time of surgery from the preoperative radiographs. We then added historical data about the patients to the best performing model to create a final model and tested it on an independent dataset. Results The convolutional neural network we built performed well when detecting loosening from radiographs alone. The first model built de novo with only the radiological image as input had an accuracy of 70%. The final model, which was built by fine-tuning a publicly available model named DenseNet, combining the AP and lateral radiographs, and incorporating information from the patient’s history, had an accuracy, sensitivity, and specificity of 88.3%, 70.2%, and 95.6% on the independent test dataset. It performed better for cases of revision THA with an accuracy of 90.1%, than for cases of revision TKA with an accuracy of 85.8%. Conclusion This study showed that machine learning can detect prosthetic loosening from radiographs. Its accuracy is enhanced when using highly trained public algorithms, and when adding clinical data to the algorithm. While this algorithm may not be sufficient in its present state of development as a standalone metric of loosening, it is currently a useful augment for clinical decision making. 
Cite this article: Bone Joint J 2020;102-B(6 Supple A):101–106.
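The accuracy, sensitivity, and specificity figures reported above all derive from a standard confusion matrix. The sketch below shows that computation with invented counts for a hypothetical 200-case test set, not the study's actual data.

```python
# Diagnostic metrics from confusion-matrix counts.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity, and specificity from true/false positives/negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # loose implants correctly flagged
    specificity = tn / (tn + fp)  # fixed implants correctly cleared
    return accuracy, sensitivity, specificity

# Illustrative counts for a hypothetical 200-case test set.
acc, sens, spec = diagnostic_metrics(tp=33, fp=7, tn=153, fn=7)
```

The pattern in the study (high specificity, lower sensitivity) means the model rarely calls a fixed implant loose but misses some genuinely loose ones, which matters when weighing it as a screening versus confirmatory tool.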


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 1041
Author(s):  
Amirhosein Mosavi ◽  
Manouchehr Shokri ◽  
Zulkefli Mansor ◽  
Sultan Noman Qasem ◽  
Shahab S. Band ◽  
...  

In this study, a new approach based on intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy logic based approach is formulated to find an approximate solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square-root cubature Kalman filter (SCKF) such that the proposed fitness function is minimized. Furthermore, the stability and boundedness of the estimation error are proved by a novel approach based on the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method results in an accurate solution with rapid convergence and a lower computational cost.
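For readers unfamiliar with pantograph equations, the sketch below shows the structure of the problem being solved, using a plain forward-Euler scheme rather than the paper's fuzzy method: y'(t) = a·y(t) + b·y(q·t) with 0 < q < 1 requires the delayed value y(q·t), which is interpolated from the already-computed solution history. The coefficients and step count are illustrative.

```python
# Forward-Euler solver for the pantograph equation y'(t) = a*y(t) + b*y(q*t).
import math

def solve_pantograph(a, b, q, y0, t_end, n=2000):
    """Euler steps on [0, t_end]; y(q*t) is linearly interpolated from history."""
    h = t_end / n
    ys = [y0]
    for i in range(n):
        tq = q * (i * h)           # q*t_i <= t_i, so it lies in the history
        j = int(tq / h)
        if j + 1 < len(ys):        # interpolate between stored grid values
            frac = tq / h - j
            y_delay = ys[j] + frac * (ys[j + 1] - ys[j])
        else:
            y_delay = ys[j]
        ys.append(ys[-1] + h * (a * ys[-1] + b * y_delay))
    return ys

# With b = 0 the delay term vanishes and the equation reduces to y' = a*y.
y_exp = solve_pantograph(a=-1.0, b=0.0, q=0.5, y0=1.0, t_end=1.0)
y_pan = solve_pantograph(a=-1.0, b=0.5, q=0.5, y0=1.0, t_end=1.0)
```

The b = 0 case gives a cheap sanity check against exp(a·t); the "singular" and "multi-pantograph" variants the paper targets add coefficient singularities and several delay terms, which is where a simple scheme like this degrades and the fuzzy approximator is proposed instead.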


2020 ◽  
Vol 2 (Supplement_3) ◽  
pp. ii1-ii1
Author(s):  
Manabu Kinoshita ◽  
Yoshitaka Narita ◽  
Yonehiro Kanemura ◽  
Haruhiko Kishima

Abstract. Quantitative imaging, primarily focused on brain tumors’ genetic alterations, has gained traction since the introduction of molecular-based diagnosis of gliomas. This trend started with fine-tuning MRS for detecting intracellular 2HG in IDH-mutant astrocytomas and has expanded into a novel research field named “radiomics”. Along with the explosive development of machine learning algorithms, radiomics has become one of the most competitive research fields in neuro-oncology. However, one should be cautious in interpreting research achievements produced by radiomics, as there is no “standard” in this novel field. For example, the method used for image feature extraction differs from study to study; some utilize machine learning for image feature extraction while others do not. Furthermore, the types of input images vary: some studies restrict data input to conventional anatomical MRI, while others include diffusion-weighted or even perfusion-weighted images. Taken together, however, previous reports seem to support the conclusion that IDH mutation status can be predicted with 80 to 90% accuracy for lower-grade gliomas. In contrast, the prediction of MGMT promoter methylation status for glioblastoma is exceptionally challenging. Although radiomics shows sound improvements, there is still no clear indication of when daily clinical practice can incorporate this novel technology. Difficulty in generalizing the acquired prediction model to external cohorts is the major challenge in radiomics. This problem may derive from the fact that radiomics requires normalization of qualitative MR images into semi-quantitative images. Introducing “true” quantitative MR images to radiomics may be a key solution to this inherent problem.

