Predictive Modeling of a Caddo Structure in the Ouachita Mountains, Montgomery County, Arkansas

Author(s):  
Vanessa N. Hanvey

During the Arkansas Archaeological Survey/Society Training Program in June 2014, an arc of postmolds from a Caddo structure was uncovered at site 3MN298. The site is located in Montgomery County, Arkansas, within the Ouachita National Forest, on a bend of the Ouachita River. After reviewing the literature on Caddo architecture, an attempt was made to predict the size and shape of the building as well as the location of any associated features. In September 2013, a small field crew effectively ground-truthed the model. This article explores the process of creating and using a predictive model to guide archaeological excavations of a Caddo period structure and presents the results of this endeavor.
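The prediction described here turns on projecting a complete building footprint from a partial arc of posts. As a purely illustrative sketch (the article does not publish its coordinates or method as code), the Python snippet below fits a least-squares circle to hypothetical postmold coordinates to estimate a structure's center and diameter; all numbers and the fit_circle helper are invented for the example.

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit (Kasa method): solve the linear system
    x^2 + y^2 = 2ax + 2by + c for center (a, b), with r^2 = c + a^2 + b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return (a, b), r

# Hypothetical postmold coordinates (meters, site grid) along an exposed arc.
x = np.array([10.2, 10.9, 11.7, 12.6, 13.4, 14.1])
y = np.array([20.5, 21.3, 21.8, 22.0, 21.9, 21.5])

center, radius = fit_circle(x, y)
print(f"Predicted center: ({center[0]:.1f}, {center[1]:.1f}) m, diameter: {2 * radius:.1f} m")
```

The predicted circle then suggests where the remaining wall posts, the entrance, and a central hearth might be sought.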

2021 ◽  
pp. 219256822110193
Author(s):  
Kevin Y. Wang ◽  
Ijezie Ikwuezunma ◽  
Varun Puvanesarajah ◽  
Jacob Babu ◽  
Adam Margalit ◽  
...  

Study Design: Retrospective review. Objective: To use predictive modeling and machine learning to identify patients at risk for venous thromboembolism (VTE) following posterior lumbar fusion (PLF) for degenerative spinal pathology. Methods: Patients undergoing single-level PLF in the inpatient setting were identified in the National Surgical Quality Improvement Program database. Our outcome measure of VTE included all patients who experienced a pulmonary embolism and/or deep venous thrombosis within 30 days of surgery. Two different methodologies were used to identify VTE risk: 1) a novel predictive model derived from multivariable logistic regression of significant risk factors, and 2) a tree-based extreme gradient boosting (XGBoost) algorithm using preoperative variables. The methods were compared against legacy risk-stratification measures, ASA and the Charlson Comorbidity Index (CCI), using the area-under-the-curve (AUC) statistic. Results: 13,500 patients who underwent single-level PLF met the study criteria. Of these, 0.95% had a VTE within 30 days of surgery. The 5 clinical variables found to be significant in the multivariable predictive model were: age > 65, obesity grade II or above, coronary artery disease, functional status, and prolonged operative time. The predictive model exhibited an AUC of 0.716, which was significantly higher than the AUCs of ASA and CCI (all, P < 0.001), and comparable to that of the XGBoost algorithm (P > 0.05). Conclusion: Predictive analytics and machine learning can be leveraged to aid in the identification of patients at risk of VTE following PLF. Surgeons and perioperative teams may find these tools useful to augment clinical decision making and risk stratification.
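A minimal sketch of the comparison workflow this abstract describes: a multivariable logistic regression and a gradient-boosted tree classifier are fit to the same binary preoperative predictors and compared by AUC. The data below are synthetic stand-ins (the effect sizes and the low event rate are assumptions chosen only to echo the abstract, not the NSQIP cohort), and scikit-learn/xgboost are used in place of the authors' unspecified tooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for the preoperative predictors named in the abstract.
n = 13_500
X = np.column_stack([
    rng.integers(0, 2, n),   # age > 65
    rng.integers(0, 2, n),   # obesity grade II or above
    rng.integers(0, 2, n),   # coronary artery disease
    rng.integers(0, 2, n),   # dependent functional status
    rng.integers(0, 2, n),   # prolonged operative time
])
# Assumed effect sizes; the intercept is chosen to give a rare outcome (~1%).
logit = -6.1 + X @ np.array([0.6, 0.5, 0.4, 0.7, 0.8])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss").fit(X_tr, y_tr)

print("Logistic regression AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
print("XGBoost AUC:            ", roc_auc_score(y_te, xgb.predict_proba(X_te)[:, 1]))
```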


2019 ◽  
Vol 9 (10) ◽  
pp. 2048 ◽  
Author(s):  
Jin Li

Spatial predictive methods are increasingly being used to generate predictions across various disciplines in environmental sciences. Accuracy of the predictions is critical as they form the basis for environmental management and conservation. Therefore, improving accuracy by selecting an appropriate method and then developing the most accurate predictive model(s) is essential. However, it is challenging to select an appropriate method and find the most accurate predictive model for a given dataset because of the many aspects and multiple factors involved in the modeling process. Many previous studies considered only a portion of these aspects and factors, often leading to sub-optimal or even misleading predictive models. This study evaluates the spatial predictive modeling process and identifies nine major components of spatial predictive modeling. Each of these nine components is then reviewed, and guidelines for selecting and applying the relevant components and developing accurate predictive models are provided. Finally, reproducible examples using spm, an R package, demonstrate how to select and develop predictive models using machine learning, geostatistics, and their hybrid methods according to predictive accuracy, and how to generate and visualize spatial predictions in environmental sciences.
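The reproducible examples in the article use the spm R package; the hedged Python sketch below only mirrors the core idea, comparing a geostatistics-style interpolator (distance-weighted KNN as a stand-in for inverse distance weighting) against a machine-learning method (random forest) by cross-validated predictive accuracy on a synthetic spatial dataset. It is an analogy, not a port of spm, and the data and settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)

# Synthetic spatial dataset: coordinates plus a smoothly varying response with noise.
n = 500
lon, lat = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
z = np.sin(lon) + np.cos(lat) + 0.1 * lon * lat + rng.normal(0, 0.3, n)
X = np.column_stack([lon, lat])

cv = KFold(n_splits=10, shuffle=True, random_state=1)
models = {
    "IDW-like (distance-weighted KNN)": KNeighborsRegressor(n_neighbors=12, weights="distance"),
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=1),
}
for name, model in models.items():
    # Mean cross-validated R^2, roughly analogous to the VEcv measure spm reports.
    scores = cross_val_score(model, X, z, cv=cv, scoring="r2")
    print(f"{name}: {100 * scores.mean():.1f}% variance explained")
```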


1986 ◽  
Vol 15 (4) ◽  
pp. 389-393
Author(s):  
Dennis I. Misler

Training offices often have to decide whether to develop and present training with in-house staff or to contract out. In the Montgomery County (Maryland) Government, a policy of belt-tightening created the need for more effective management development, and cutbacks in the staff of the Training Unit made it necessary to contract out for training. The resulting success of the Management Development Program has spawned other training program opportunities, all of which have been made possible by the effective use of contractors.


2015 ◽  
pp. 321-332
Author(s):  
András Bödőcs ◽  
Gábor Kovács ◽  
Krisztián Anderkó

The first reconstruction of the centuriatio of Savaria was attempted by András Mócsy, who tried to draw it using mid-scale topographical maps. Since his publication, no archaeological attempt has been made in the past 40 years to prove his theory. In recent years we have continued the survey of the Savarian centuriatio's existence with the support of GIS methods. Fortunately, an interesting relationship was noted between the information from several archaeological excavations and the aerial archaeological phenomena; thus, we were able to build a predictive model-network of the assumed centuriatio. The new grid differs completely from the previous reconstruction. The predictive model's agglomeration of the assumed centuriatio traces could be refined, and the refined model was controlled with the use of archaeological field survey and geophysical survey as well. The new reconstruction opened new opportunities for the interpretation of excavated sites and previously known Roman roads and aqueducts discovered in recent decades. Another interesting relationship could be found between the watercourses that ran through the former territory of the colonia and the Roman field boundary system: the probable impact of Roman agriculture on the landscape, which affected the “premodern” (prior to the modern stream regulations) watercourse system.
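One way to picture such a predictive model-network is as a rotated square grid whose origin, orientation, and module are hypothesized and then tested against known features. The Python sketch below is illustrative only: the grid module (one centuria side, taken here as roughly 20 actus ≈ 710 m), the origin, the orientation, and the feature coordinates are all assumptions, not values from the paper.

```python
import numpy as np

def distance_to_grid(points, origin, angle_deg, module=710.0):
    """Distance (m) from each point to the nearest line of a square grid
    defined by an origin, an orientation, and a module size (one centuria side)."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    local = (points - origin) @ rot.T                       # grid-aligned coordinates
    offset = np.abs(((local + module / 2) % module) - module / 2)
    return offset.min(axis=1)                               # nearest N-S or E-W grid line

# Hypothetical projected coordinates (m) of road sections and boundary traces.
features = np.array([[531420.0, 218160.0],
                     [532130.0, 218880.0],
                     [533555.0, 217450.0]])
origin = np.array([530000.0, 217000.0])                     # assumed grid origin
print(distance_to_grid(features, origin, angle_deg=12.5))
```

Small residuals across many independent features would support the hypothesized grid; large or random residuals would argue against it.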


2020 ◽  
Author(s):  
Shelby Short ◽  
Luke Basler ◽  
David Morley ◽  
Sue L. Eggert ◽  
...  

2019 ◽  
Vol 255 ◽  
pp. 03004 ◽  
Author(s):  
Mat Asiah ◽  
Khidzir Nik Zulkarnaen ◽  
Deris Safaai ◽  
Mat Yaacob Nik Nurul Hafzan ◽  
Mohamad Mohd Saberi ◽  
...  

Despite the provision of high-quality education, the demand for predicting student academic performance has become ever more critical to improving quality and assisting students in achieving strong performance in their studies. The lack of an efficient and accurate prediction model is one of the major issues. Predictive analytics can provide institutions with intuitive and better decision making. The objective of this paper is to review current research activities related to academic analytics, focusing on predicting student academic performance. Various methods have been proposed by previous researchers to develop the best-performing model using a variety of student data, techniques, algorithms, and tools. Predictive modeling used in predicting student performance relates to several learning tasks such as classification, regression, and clustering. To achieve the best prediction model, many variables have been chosen and tested to find the most influential attributes for prediction. Accurate performance prediction is helpful for providing guidance in the learning process and benefits students by helping them avoid poor scores. The predictive model can furthermore help instructors forecast course completion, including the student's final grade, which is directly correlated with student performance success. Harvesting an effective predictive model requires good input data and variables, a suitable predictive method, and a powerful and robust prediction model.
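As a concrete, hedged illustration of the classification task this review discusses, the sketch below fits two common classifiers to synthetic student records and compares them by cross-validated AUC. The attributes, effect sizes, and at-risk label are invented stand-ins for the kinds of variables the reviewed studies use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)

# Synthetic stand-ins for commonly used attributes: prior GPA, attendance rate,
# assignment submission rate, and weekly hours of LMS activity.
n = 1000
X = np.column_stack([
    rng.uniform(2.0, 4.0, n),    # prior GPA
    rng.uniform(0.5, 1.0, n),    # attendance rate
    rng.uniform(0.3, 1.0, n),    # assignment submission rate
    rng.gamma(2.0, 2.0, n),      # weekly LMS hours
])
risk = 5.5 - 1.2 * X[:, 0] - 2.0 * X[:, 1] - 1.5 * X[:, 2] - 0.1 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n) > 0).astype(int)   # 1 = at risk of a poor final grade

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("decision tree", DecisionTreeClassifier(max_depth=4))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```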


Collections ◽  
2020 ◽  
pp. 155019062095153
Author(s):  
Ronald S. Krug ◽  
Peter J. Pilles

Many land managing agencies have policies that forbid the collection of artifacts during archaeological survey and, even under controlled situations, collection is determined to be an “Adverse Effect” under Section 106 compliance interpretations of the National Historic Preservation Act. The main rationale is that removal destroys the contextual information of the artifact in relation to the rest of the site. This paper argues that such “non-collecting policies” are short-sighted and do not “protect” artifacts from unauthorized removal. In these days of technology, when sub-meter GPS instruments and other tools are available to pinpoint the location of artifacts, we submit that not collecting artifacts with important information potential is deleterious to interpreting the archaeological record. This point is made by a case study from the Coconino National Forest in northern Arizona that illustrates that the excuse “if I don’t pick it up, someone else will” is a correct assumption. Surface collections, properly documented, provide useful information that justifies their collection and curation for present-day and future research.


Blood ◽  
2019 ◽  
Vol 134 (Supplement_1) ◽  
pp. 5799-5799
Author(s):  
Benjamin Djulbegovic ◽  
Jennifer Berano Teh ◽  
Lennie Wong ◽  
Iztok Hozo ◽  
Saro H. Armenian

Background: Therapy-related heart failure (HF) is a leading cause of morbidity and mortality in patients who undergo successful hematopoietic cell transplantation (HCT) for hematological malignancies. To eventually help improve management of HF, timely and accurate diagnosis of HF is crucial. Currently, no established method for the diagnosis of HF exists. One approach to help improve the diagnosis and management of HF is to use predictive modeling to assess the likelihood of HF, using key predictors known to be associated with HF. Such models can, in turn, be used for bedside management, including the implementation of early screening approaches. That said, many techniques for predictive modeling exist, and it is currently not known whether artificial intelligence machine learning (ML) approaches are superior to standard statistical techniques for the development of predictive models for clinical practice. Here we present a comparative analysis of traditional multivariable models and ML predictive modeling in an attempt to identify the best predictive model for the diagnosis of HF after HCT. Methods: At City of Hope, we have established a large prospective cohort (>12,000 patients) of HCT survivors (HCT survivorship registry). This registry is dynamic and interfaces with other registries and databases (e.g., electronically indexed inpatient and outpatient medical records, national death index [NDI]). We utilized natural language processing (NLP) to extract 13 key demographic and clinical variables that are known to be associated with HF. For the purposes of this project, we extracted data from 1,834 patients (~15% sample) who underwent HCT between 1994 and 2004 to allow adequate follow-up for the development of HF. We fit and compared 6 models [standard logistic regression (glm), an FFT (fast-and-frugal tree) decision model, and four ML models: CART (classification and regression trees), SVM (support vector machine), NN (neural network), and RF (random forest)]. Data were randomly split (50:50) into training and validation samples; the ultimate assessment of the best algorithm was based on its performance (in terms of calibration and discrimination) in the validation sample. The DeLong test was used to test for statistical differences in discrimination [i.e., area under the curve (AUC)] among the models. Results: The accuracy of NLP was consistently >95%. Only the standard logistic regression (glm) model was well calibrated (Hosmer-Lemeshow goodness-of-fit test: p=0.104); all other models were miscalibrated. The standard glm model also had the best discrimination properties (AUC=0.704 in the training and 0.619 in the validation set). CART performed the worst (AUC=0.5). The other ML models (RF, NN, and SVM) also showed modest discriminatory characteristics (AUCs of 0.547, 0.573, and 0.619, respectively). The DeLong test indicated that all models outperformed the CART model (at nominal p<0.05) but were statistically indistinguishable from each other (see Figure). Statistical power was borderline sufficient for the glm model and very limited for the ML models. Conclusions: None of the tested models showed optimal performance characteristics for use in clinical practice. The ML models performed even worse than the standard logistic model; given the increasing use of ML models in medicine, we caution against the use of these models without adequate comparative testing. Figure. Disclosures: No relevant conflicts of interest to declare.
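A simplified sketch of the evaluation logic reported above: fit a logistic regression and one ML model on a 50:50 train/validation split and compare both discrimination (AUC) and calibration on the validation half. The data are synthetic, only two of the six model families are shown, and the calibration check uses binned calibration curves rather than the Hosmer-Lemeshow test or the DeLong comparison.

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-ins for a handful of the 13 predictors mentioned in the abstract.
n = 1834
X = rng.normal(size=(n, 5))
p = 1 / (1 + np.exp(-(-2.5 + X @ np.array([0.8, 0.5, 0.3, 0.2, 0.1]))))
y = rng.binomial(1, p)

# 50:50 split into training and validation samples, as in the abstract.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

for name, model in [("glm (logistic regression)", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=500, random_state=0))]:
    prob = model.fit(X_tr, y_tr).predict_proba(X_va)[:, 1]
    frac_pos, mean_pred = calibration_curve(y_va, prob, n_bins=5)
    print(f"{name}: AUC = {roc_auc_score(y_va, prob):.3f}, "
          f"max calibration gap = {np.abs(frac_pos - mean_pred).max():.3f}")
```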


Author(s):  
SORABH GUPTA ◽  
P. C. TEWARI

The present paper deals with opportunities for availability predictive modeling of a thermal plant using a Markov process and a probabilistic approach. These opportunities are identified by evaluating a predictive model built for the steam generation system of a thermal power plant. This feasibility study covers two areas: development of a predictive model and evaluation of performance with the help of the developed model. The system under study consists of five subsystems, each with three feasible states: full working, reduced capacity working, and failed. After drawing the transition diagram, differential equations are generated and a probabilistic simulated predictive model using the Markov approach is developed under some assumptions. An availability matrix for each subsystem is also developed, which provides various availability levels. On the basis of this study, the performance of each subsystem of the steam generation system is evaluated and maintenance decisions are then made for the subsystems.
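To make the Markov availability idea concrete, here is a minimal three-state sketch (full working, reduced capacity, failed) that solves the steady-state equations of an assumed transition-rate matrix for long-run availability. The failure and repair rates are invented for illustration; the paper's five-subsystem model and its differential equations are not reproduced.

```python
import numpy as np

# Minimal three-state Markov availability sketch: state 0 = full working,
# state 1 = reduced capacity, state 2 = failed. Rates (per hour) are assumed.
lam1, lam2 = 0.01, 0.02    # failure rates: full -> reduced, reduced -> failed
mu1, mu2 = 0.10, 0.05      # repair rates: reduced -> full, failed -> reduced

# Transition-rate (generator) matrix Q: each row sums to zero.
Q = np.array([
    [-lam1,          lam1,   0.0],
    [  mu1, -(mu1 + lam2),  lam2],
    [  0.0,           mu2,  -mu2],
])

# Steady-state probabilities: solve pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # system available in full or reduced-capacity states
print("State probabilities:", np.round(pi, 4))
print("Long-run availability:", round(availability, 4))
```

In the paper's setting the same steady-state calculation would be repeated for each subsystem and combined into the availability matrix used for maintenance decisions.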

