Relating Ndesign to Field Compaction: A Case Study in Minnesota

Author(s):  
Tianhao Yan ◽  
Mugurel Turos ◽  
Chelsea Bennett ◽  
John Garrity ◽  
Mihai Marasteanu

High field density increases the durability of asphalt pavements. In a current research effort, the University of Minnesota and the Minnesota Department of Transportation (MnDOT) have been working on designing asphalt mixtures with higher field densities. One critical issue is the determination of the Ndesign values for these mixtures. The physical meaning of Ndesign is discussed first. Instead of the traditional approach, in which Ndesign represents a measure of rutting resistance, Ndesign is interpreted as an indication of the compactability of mixtures. Field density data from recent Minnesota pavement projects are analyzed. A clear negative correlation between Ndesign and field density level is identified, which confirms the significant effect of Ndesign on the compactability, and consequently the field density, of mixtures. To achieve consistency between laboratory and field compaction, it is proposed that Ndesign be determined to reflect the actual field compaction effort. A parameter called the equivalent number of gyrations to field compaction effort (Nequ) is proposed to quantify the field compaction effort, and Nequ values are calculated for recent Minnesota pavement projects. The results indicate that the field compaction effort for the Minnesota projects evaluated corresponds to about 30 gyrations of gyratory compaction. The computed Nequ is then used as the Ndesign for a Superpave 5 mixture placed in a paving project, for which field density data and laboratory performance test results are obtained. The data analysis shows that both the field density and the pavement performance of the Superpave 5 mixture are significantly improved compared with traditional mixtures. The results indicate that Nequ provides a reasonable estimate of field compaction effort and can be used as the Ndesign for achieving higher field densities.
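The abstract does not spell out how Nequ is computed, but one plausible reading is that it is the gyration count at which the laboratory densification curve reaches the measured field density. A minimal sketch under that assumption, using the common semi-log densification model; the data values and the model choice are illustrative, not taken from the study:

```python
import numpy as np

def equivalent_gyrations(gyrations, lab_density, field_density):
    """Estimate Nequ: the gyration count at which the laboratory
    densification curve reaches the measured field density.

    Assumes the common semi-log model density = a + b*log10(N),
    fitted to gyratory compactor data (an assumption, not the
    study's documented procedure)."""
    log_n = np.log10(gyrations)
    b, a = np.polyfit(log_n, lab_density, 1)  # linear fit in log10(N)
    # invert the model: field_density = a + b*log10(Nequ)
    return 10 ** ((field_density - a) / b)

# hypothetical densification data (% of Gmm) from a gyratory compactor
gyr = np.array([5, 10, 20, 50, 100])
dens = np.array([88.0, 90.1, 92.2, 94.9, 97.0])
n_equ = equivalent_gyrations(gyr, dens, field_density=93.4)
```

With these invented numbers the estimate lands near 30 gyrations, consistent in magnitude with the value the abstract reports for Minnesota projects.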

Author(s):  
Tianhao Yan ◽  
Mihai Marasteanu ◽  
Chelsea Bennett ◽  
John Garrity

In a current research effort, the University of Minnesota and the Minnesota Department of Transportation have been working on designing asphalt mixtures that can be constructed at 5% air voids, similar to the Superpave 5 mix design. High field density of asphalt mixtures is desired because it increases the durability and extends the service life of asphalt pavements. The paper investigates the current state of field densities in Minnesota, to better understand how much improvement is needed from the current field density level to the desired level, and to identify possible changes to the current mix design that would improve field compactability. Field densities and material properties of 15 recently constructed projects in Minnesota are investigated. First, a statistical analysis is performed to study the probability distribution of field densities. Then, a two-way analysis of variance is conducted to check whether the nominal maximum aggregate size and traffic level have any significant effect on field densities. A correlation analysis is then conducted to identify significant correlations between the compactability of mixtures and their material properties. The results show that the field density data approximately follow a normal distribution, with an average field density of 93.4% of the theoretical maximum specific gravity; there are significant differences in field density between mixtures designed for different traffic levels; and the compactability of mixtures is significantly correlated with the fine aggregate angularity and fine aggregate gradation of the mixtures.
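The analysis pipeline described (normality check, variance analysis by traffic level, correlation with fine aggregate angularity) can be sketched in a few lines of Python. Everything below is synthetic: the data are randomly generated around the reported 93.4% average, a one-way ANOVA stands in for the paper's two-way analysis, and the scipy-based workflow is an assumption about tooling, not the study's actual procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical field densities (% of Gmm) for 15 projects
densities = rng.normal(93.4, 0.8, size=15)

# 1) check the approximate normality of the field density data
_, p_norm = stats.shapiro(densities)

# 2) effect of traffic level on density (one-way ANOVA here, a
#    simplification of the paper's two-way analysis of variance)
low, med, high = densities[:5], densities[5:10] - 0.9, densities[10:] - 1.8
_, p_traffic = stats.f_oneway(low, med, high)

# 3) correlation between compactability and fine aggregate angularity
#    (FAA); the negative slope is built into this synthetic example
faa = rng.normal(45, 2, size=15)
compactability = -0.5 * faa + rng.normal(0, 0.3, size=15)
r, p_corr = stats.pearsonr(faa, compactability)
```

A small p_traffic would mirror the paper's finding of significant differences across traffic levels, and a negative r mirrors the reported correlation with fine aggregate angularity.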


2021 ◽  
pp. 1-12
Author(s):  
Zhiyu Yan ◽  
Shuang Lv

Accurate prediction of traffic flow is of great significance for alleviating urban traffic congestion. Most previous studies used historical traffic data but applied a single model or algorithm across the whole prediction space, ignoring differences between regions. In this context, based on temporal and spatial heterogeneity, a Classification and Regression Trees-K-Nearest Neighbor (CART-KNN) hybrid prediction model was proposed to predict short-term taxi demand. First, a concentric partitioning method was applied to divide the test area into discrete small areas according to boarding density. Then the CART model was used to divide the dataset of each area according to its temporal characteristics, and a KNN model was established for each subset, using the corresponding boarding density data to estimate its parameters. Finally, the proposed method was tested on New York City Taxi and Limousine Commission (TLC) data, and the traditional KNN model, a backpropagation (BP) neural network, and a long short-term memory (LSTM) model were compared with the proposed CART-KNN model. The selected models were used to predict the demand for taxis in New York City, and Kriging interpolation was used to obtain predictions for all regions. The results suggest that the proposed CART-KNN model performed better than the other general models, showing smaller mean absolute percentage error (MAPE) and root mean square error (RMSE) values. The improved prediction accuracy of the CART-KNN model helps in understanding regional demand patterns by partitioning the boarding density data along the time and space dimensions, and the partitioning method can be extended to many models that use traffic data.
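The core partition-then-local-model idea can be sketched with scikit-learn: a CART tree splits the samples (here by hypothetical temporal features), then a KNN regressor is fit within each leaf. This is a simplified illustration with invented data, not the paper's implementation, which also involves concentric spatial partitioning and Kriging interpolation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

class CARTKNN:
    """Hybrid sketch: a CART tree partitions samples by temporal
    features, then a separate KNN regressor is fit on each leaf."""

    def __init__(self, max_leaf_nodes=4, n_neighbors=3):
        self.tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
        self.n_neighbors = n_neighbors
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)  # leaf index per sample
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            k = min(self.n_neighbors, int(mask.sum()))
            model = KNeighborsRegressor(n_neighbors=k)
            model.fit(X[mask], y[mask])
            self.leaf_models[leaf] = model
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        out = np.empty(len(X))
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            out[mask] = self.leaf_models[leaf].predict(X[mask])
        return out

# hypothetical samples: features [hour_of_day, day_of_week], target =
# boarding density with a daily cycle plus a weekday effect
rng = np.random.default_rng(1)
X = rng.uniform([0, 0], [24, 7], size=(200, 2))
y = 10 * np.sin(X[:, 0] / 24 * 2 * np.pi) + X[:, 1] + rng.normal(0, 0.5, 200)
model = CARTKNN().fit(X, y)
preds = model.predict(X[:5])
```

The tree captures the coarse temporal regimes while each leaf's KNN adapts locally, which is the mechanism the abstract credits for the accuracy gain.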


1994 ◽  
Vol 4 (3) ◽  
pp. 254-271 ◽  
Author(s):  
Gary C. Alexander ◽  
Linda Keller

Shared decision making and shared leadership from multiple perspectives are essential for true educational transformation to occur. A collaborative research effort between the University of Minnesota and the Minnesota Office of Educational Leadership (OEL) provided data on the perceptions of principals at twenty-two urban, suburban, and rural schools participating in the transformation process. Ethnographic techniques were used to gather data to understand the development of leadership skills, shared governance, and shared vision at individual sites. Findings from the in-depth interviews indicate that an awareness of obstacles to change is a necessary first step toward implementing change; a majority of principals support some degree of site-centered decision making; central office administration needs to facilitate site autonomy; and examples of site autonomy and true shared governance exist.


Author(s):  
Vittal S. Anantatmula ◽  
James B. Webb

The Critical Path (CP) method has come under scrutiny in recent years as Critical Chain (CC) project management gains attention as the next evolution of project schedule development. Advocates of the Critical Chain method cite the Critical Path method's failure to address uncertainty properly. The purpose of this paper is to apply some features of Critical Chain concepts to the traditional Critical Path approach. More importantly, this research effort aims to demonstrate the applicability of Critical Chain Project Management (CCPM) to managing a portfolio of projects. The analysis, based on a critical review of past studies, experiments with both Critical Path and Critical Chain techniques, and a case study, presents recommendations for gaining the benefits of Critical Chain in a traditional Critical Path scheduling environment and for managing a portfolio of projects or programs using some concepts of the Critical Chain method.
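To make the CP/CC contrast concrete, here is a small sketch that combines a classic CPM forward pass with one common Critical Chain buffer-sizing heuristic (root-square-error: half the gap between aggressive and safe duration estimates per chain task, combined in quadrature). The task network and durations are hypothetical, and the sketch ignores resource contention, which the full Critical Chain method also accounts for:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# task -> (aggressive duration, safe duration, predecessors); hypothetical
tasks = {
    "A": (4, 8, []),
    "B": (3, 6, ["A"]),
    "C": (5, 9, ["A"]),
    "D": (2, 4, ["B", "C"]),
}

def critical_chain(tasks):
    """CPM forward pass on aggressive durations, then an RSSQ project
    buffer over the resulting chain."""
    order = TopologicalSorter(
        {t: deps for t, (_, _, deps) in tasks.items()}).static_order()
    finish, pred = {}, {}
    for t in order:
        dur, _, deps = tasks[t]
        start = max((finish[d] for d in deps), default=0)
        finish[t] = start + dur
        # remember the predecessor that drives this task's start time
        pred[t] = max(deps, key=lambda d: finish[d]) if deps else None
    # walk back from the latest-finishing task to recover the chain
    chain, t = [], max(finish, key=finish.get)
    while t is not None:
        chain.append(t)
        t = pred[t]
    chain.reverse()
    # RSSQ buffer: half the aggressive/safe gap per chain task, combined
    buffer = sum(((safe - aggr) / 2) ** 2 for aggr, safe, _ in
                 (tasks[t] for t in chain)) ** 0.5
    return chain, finish[chain[-1]], buffer

chain, duration, buffer = critical_chain(tasks)
```

For this network the chain is A → C → D with an 11-unit aggressive duration and a 3-unit project buffer; scheduling the aggressive durations plus the aggregated buffer, rather than padding every task, is the central CC idea the paper ports back into a CP environment.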


2013 ◽  
Vol 823 ◽  
pp. 392-395
Author(s):  
Chun Yan Wang ◽  
Yi Bo Wang

To simulate a real far-field environment and obtain the best performance from the laser-measurement turntable tracking system under development, the target is locked by tracking, and the turntable adjusts the magnitude and direction of the tracking system's angular velocity. In this way, the effect of a space moving target in different motion states on laser irradiation can be simulated, the targeting performance can be tested, and a performance evaluation of the targeting system can be achieved.


2018 ◽  
Vol 22 (4) ◽  
pp. 175-181
Author(s):  
P. Ghanati ◽  
H. MohammadZadeh

Background and Study Aim: The purpose of this study was to investigate the effect of a game-based educational method and a traditional approach on the performance of selected basketball skills. Materials: The research was semi-experimental. Participants included 30 adolescent girls who were divided into two groups, game-based practice (n = 15) and traditional training (n = 15), based on pre-test scores. Both groups performed the intervention program for 8 weeks, with three 60-minute sessions per week. A post-test was then performed, and the data were analyzed using SPSS 21 at a significance level of 0.05. Results: Both the game-based and traditional groups showed significant improvement in basketball performance, although neither group improved in dribbling performance. In the performance test, however, the game-based group improved significantly more than the traditional-practice group. Conclusion: The results suggest that a game-based educational method can significantly improve important factors of basketball performance in youth, which can carry over into more complex situations.
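The group comparison above reduces to standard significance testing at α = 0.05. A minimal sketch with synthetic scores; the group means and spreads are invented, and an independent-samples t-test stands in for whatever specific tests the SPSS analysis actually used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# hypothetical post-test performance scores (arbitrary units)
game_based = rng.normal(78, 5, size=15)   # game-based group, n = 15
traditional = rng.normal(72, 5, size=15)  # traditional group, n = 15

# independent-samples t-test at the study's 0.05 significance level
t_stat, p_value = stats.ttest_ind(game_based, traditional)
significant = p_value < 0.05
```

With n = 15 per group, a between-group comparison like this has limited power, which is one reason such studies report some outcomes (e.g., dribbling) as non-significant.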


2021 ◽  
Vol 9 (6) ◽  
pp. 594
Author(s):  
Tafsir Matin Johansson ◽  
Dimitrios Dalaklis ◽  
Aspasia Pastra

The current regulatory landscape that applies to maritime service robotics, aptly termed robotics and autonomous systems (RAS), is quite complex. When it comes to patents, there are multifarious considerations in relation to vessel survey, inspection, and maintenance processes under national and international law. Adherence is challenging, given that the traditional delivery methods are viewed as unsafe, strenuous, and laborious. Service robotics, namely micro aerial vehicles (MAVs) or drones, magnetic-wheeled crawlers (crawlers), and remotely operated vehicles (ROVs), function by relying on the architecture of the Internet of Robotic Things. These are being introduced as time-saving apparatuses, accompanied by the promise of acquiring concrete and sufficient data for identifying vessel structural weaknesses with the highest level of accuracy, facilitating the decision-making processes upon which temporary and permanent measures are contingent. Nonetheless, a noticeable critical issue associated with effective RAS deployment revolves around non-personal data governance, which comprises the main analytical focus of this research effort. The impetus behind this study stems from the need to enquire whether the "data" provisions within the realm of the international technological regulatory (techno-regulatory) framework are sufficient, well organized, and harmonized, so that there are no current or future conflicts with promulgated theoretical dimensions of data that drive all subject matter-oriented actions. As is noted from the relevant expository research, the challenges are many. Engineering RAS to perfection is not the end-all and be-all. Collateral impediments must be avoided. A safety net needs to be devised to protect non-personal data. The results here indicate that established data decision dimensions call for data security and protection, as well as a consideration of ownership and liability details.
An analysis of the state of the art and the comparative results assert that the abovementioned remain neglected in the current international setting. The findings reveal specific data barriers within the existing international framework. The ways forward include strategic actions to remove these barriers and improve the overall efficacy of maritime RAS operations. The overall findings indicate that an effective transition to RAS operations requires optimizing the international regulatory framework to open pathways for such operations. Conclusions were drawn on the premise that policy reform is inevitable in order to push the RAS agenda forward before the emanation of 6G and the era of the Internet of Everything, with harmonization and further standardization being very high-priority issues.


1998 ◽  
Vol 1636 (1) ◽  
pp. 124-131 ◽  
Author(s):  
Dan Turner ◽  
Marsha Nitzburg ◽  
Richard Knoblauch

Motorists driving at night are two to three times more likely to be involved in a crash than during the day. Although about half of motor vehicle deaths occur at night, death rates based on miles driven are about four times higher at night than during the day. Nighttime driving also frustrates a large number of people, many of whom are seniors. There is an effort under way to evaluate the use of supplemental ultraviolet (UV) automobile headlights to increase nighttime visibility. Research conducted in Sweden has shown very promising results, and a preliminary field research effort recently completed in the United States found that the visibility of pavement markings increased 25 percent with UV, and that subjects generally favored its use. An extensive field study was conducted to determine the conditions under which driver performance could be improved with fluorescent traffic control devices and auxiliary UV headlights. Several static tests were done to evaluate fluorescent pavement markings, post-mounted delineators, and various pedestrian scenes under two headlight conditions (low beam only, and low beam with UV). Dynamic tests included a subjective evaluation of the two headlamp conditions and a performance test in which subjects drove an instrumented vehicle. The results of the field study indicated that pavement markings could be observed 30 percent farther, and pedestrians over 90 percent farther, with the addition of UV. Subjects consistently evaluated the use of UV headlamps as beneficial.


SPE Journal ◽  
2019 ◽  
Vol 24 (04) ◽  
pp. 1452-1467 ◽  
Author(s):  
Rolf J. Lorentzen ◽  
Xiaodong Luo ◽  
Tuhin Bhakta ◽  
Randi Valestrand

Summary In this paper, we use a combination of acoustic impedance and production data for history matching the full Norne Field. The purpose of the paper is to illustrate a robust and flexible workflow for assisted history matching of large data sets. We apply an iterative ensemble-based smoother, and the traditional approach for assisted history matching is extended to include updates of additional parameters representing rock clay content, which has a significant effect on seismic data. Further, for seismic data it is a challenge to properly specify the measurement noise, because the noise level and the spatial correlation of the measurement noise are unknown. For this purpose, we apply a method based on image denoising for estimating the spatially correlated (colored) noise level in the data. For the best possible evaluation of the workflow performance, all data are synthetically generated in this study. We assimilate production data and seismic data sequentially. First, the production data are assimilated using traditional distance-based localization, and the resulting ensemble of reservoir models is then used when assimilating seismic data. This procedure is suitable for real field applications, because production data are usually available before seismic data. If both production data and seismic data are assimilated simultaneously, the high number of seismic data might dominate the overall history-matching performance. The noise estimation for seismic data involves transforming the observations to a discrete wavelet domain. However, the resulting data do not have a clear spatial position, so the traditional distance-based localization schemes used to avoid spurious correlations and underestimated uncertainty (because of limited ensemble size) cannot be applied. Instead, we use a localization scheme based on correlations between observations and parameters that does not rely on the physical position of model variables or data.
This method automatically adapts to each observation and iteration. The results show that we reduce data mismatch for both production and seismic data, and that the use of seismic data reduces estimation errors for porosity, permeability, and net-to-gross ratio (NTG). Such improvements can provide useful information for reservoir management and planning for additional drainage strategies.
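The correlation-based localization idea can be sketched as a screening taper on a plain ensemble-smoother update: parameter/observation pairs with a weak sample correlation get zero gain, with no reference to physical distance. The implementation below is a simplified illustration (single-step update, hard correlation threshold, independent observation noise), not the adaptive iterative scheme used in the paper:

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, obs_err_std, corr_threshold=0.3):
    """One ensemble-smoother update with a simple correlation-based
    localization taper, mimicking distance-free localization.

    M: (n_param, n_ens) parameter ensemble
    D: (n_obs, n_ens) predicted data ensemble
    d_obs: (n_obs,) observed data
    """
    n_ens = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)  # parameter anomalies
    Y = D - D.mean(axis=1, keepdims=True)  # predicted-data anomalies
    # sample correlation between each parameter and each observation
    C_md = (A @ Y.T) / (n_ens - 1)
    corr = C_md / (np.outer(A.std(axis=1, ddof=1),
                            Y.std(axis=1, ddof=1)) + 1e-12)
    taper = (np.abs(corr) > corr_threshold).astype(float)
    # Kalman-type gain (observation errors assumed independent)
    C_dd = (Y @ Y.T) / (n_ens - 1) + np.diag(
        np.full(len(d_obs), obs_err_std ** 2))
    K = C_md @ np.linalg.inv(C_dd)
    K *= taper  # screen out weakly correlated pairs
    # perturbed observations, one realization per ensemble member
    rng = np.random.default_rng(0)
    D_obs = d_obs[:, None] + rng.normal(0, obs_err_std, size=D.shape)
    return M + K @ (D_obs - D)
```

Because the taper is built from ensemble statistics rather than coordinates, it applies equally well to wavelet-domain observations that have no clear spatial position, which is the situation the paragraph above describes.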

