B. EXTRACTION OF TANNIN FROM PINE WOOD

2018 ◽  
Vol 1 (2) ◽  
pp. 10-15
Author(s):  
Hermien Noorhajati ◽  
F Agus Santoso

To obtain tannin from pine wood, an extraction method is used. Operating data for the extraction of tannin from pine wood are still scarce, so this research remains worthwhile. The aim of this study was to determine the effect of solvent concentration, extraction time, and sample-to-solvent ratio on tannin yield. The procedure: 20 grams of pine wood were mixed in a three-necked flask with an alcohol solution as solvent, at various concentrations and volumes (the variables), for various extraction times (also a variable); the extraction was carried out at 60 °C. The result was filtered and its tannin content analyzed. From the tannin yield data obtained and evaluated, it was concluded that the higher the solvent concentration, the higher the tannin yield; the longer the extraction time, the higher the tannin yield, although beyond a certain time the yield becomes constant; and the larger the solvent volume, the higher the tannin yield. The highest tannin yield in this study, 8.85%, was obtained at the operating conditions of 5 hours extraction time, 300 ml solvent volume, 70 °C, and a sample weight to solvent volume ratio (g/ml) of 20 g / 300 ml.
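
As a quick illustration of the reported numbers, a minimal sketch of the yield bookkeeping (an assumption on my part: the paper does not state its formula, but yield is conventionally the mass of extracted tannin over the mass of the wood sample):

```python
def tannin_yield(tannin_mass_g, sample_mass_g):
    """Tannin yield as a percentage of the wood sample mass
    (assumed convention; the paper does not spell out the formula)."""
    return 100.0 * tannin_mass_g / sample_mass_g

# Under this convention, the reported optimum of 8.85% from a
# 20 g sample corresponds to roughly 1.77 g of extracted tannin.
print(tannin_yield(1.77, 20.0))  # -> 8.85
```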

2020 ◽  
Author(s):  
Shu-Chun Kuo ◽  
CHIEN WEI ◽  
Willy Chou

UNSTRUCTURED The recent article published in December 2020 is well written and of interest, but several questions remain that require clarification: (1) the 30 feature variables in normalized format (mean = 0 and SD = 1) need to be compared, in terms of model accuracy, with those in raw-data format; (2) the variable counts are inconsistent between the entry and preview panels in Figure 4, and there are reference typos; and (3) the data-entry format with raw blood laboratory results in Figure 4 is inconsistent with the model, which was designed to estimate parameters from normalized data. We conducted a study using the training and testing data provided by the previous study. An artificial neural network (ANN) model was trained to estimate parameters, and its accuracy was compared with that of the eight models provided by the previous study. We found that (1) normalized data yield higher accuracy than raw data; (2) typos do exist in Figure 4 (32 variables in the bottom preview panel versus 30 in the entry panel) and in Table 6; and (3) the ANN yields a probability of survival (0.91) higher than that of the previous study (0.71) on similar entry data when raw data are assumed in the app. We also demonstrated an author-made app that uses visualization to display the prediction result; presenting the result in a dashboard is a novel and innovative improvement over the previous study.
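
A minimal sketch of the normalization step at issue (not the authors' code; data shapes and values are hypothetical): each feature is standardized to mean 0 and SD 1 using statistics computed on the training set only, and the same transform must be applied to any data entered in the app.

```python
import numpy as np

def zscore_fit(X_train):
    """Compute per-feature mean and SD on the training set only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant features
    return mu, sigma

def zscore_apply(X, mu, sigma):
    """Standardize features to mean 0, SD 1 (z-scores)."""
    return (X - mu) / sigma

# Hypothetical data: 30 raw laboratory features per patient
rng = np.random.default_rng(0)
X_train = rng.normal(50, 10, (200, 30))
X_test = rng.normal(50, 10, (50, 30))
mu, sigma = zscore_fit(X_train)
X_train_n = zscore_apply(X_train, mu, sigma)
X_test_n = zscore_apply(X_test, mu, sigma)
# An ANN trained on X_train_n must also receive z-scored inputs at
# prediction time; feeding raw laboratory values (as in the app's
# entry panel) would make its survival estimates invalid.
```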


ReCALL ◽  
2021 ◽  
pp. 1-17
Author(s):  
Cédric Brudermann ◽  
Muriel Grosbois ◽  
Cédric Sarré

Abstract In a previous study (Sarré, Grosbois & Brudermann, 2019), we explored the effects of various corrective feedback (CF) strategies on interlanguage development for the online component of a blended English as a foreign language (EFL) course we had designed and implemented. Our results showed that unfocused indirect CF (feedback on all error types through the provision of metalinguistic comments on the nature of the errors made) combined with extra computer-mediated micro-tasks was the most efficient CF type to foster writing accuracy development in our context. Following up on this study, this paper further explores the effects of this specific CF type on learners' written accuracy development in an online EFL course designed for freshman STEM (science, technology, engineering, and mathematics) students. In the online course under study, this specific CF type was trialled with different cohorts of STEM learners (N = 1,150) over a five-year period (2014 to 2019) and was computer-assisted: online CF provision by a human tutor was combined with predetermined CF comments. The aim of this paper is to investigate the impact of this specific CF strategy on error types. In this respect, the data yield encouraging results in terms of writing accuracy development when learners benefit from this computer-assisted specific CF. This study thus helps to gain a better understanding of the role that CF plays in shaping students' revision processes and could inform language (teacher) education regarding the use of digital tools for the development of foreign language accuracy and the issues related to online CF provision.


Genetics ◽  
1998 ◽  
Vol 150 (1) ◽  
pp. 459-472 ◽  
Author(s):  
Hongyu Zhao ◽  
Terence P Speed

Abstract Ordered tetrad data yield information on chromatid interference, chiasma interference, and centromere locations. In this article, we show that the assumption of no chromatid interference imposes certain constraints on multilocus ordered tetrad probabilities. Assuming no chromatid interference, these constraints can be used to order markers under general chiasma processes. We also derive multilocus tetrad probabilities under a class of chiasma interference models, the chi-square models. Finally, we compare centromere map functions under the chi-square models with map functions proposed in the literature. Results in this article can be applied to order genetic markers and map centromeres using multilocus ordered tetrad data.
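
As a rough illustration of how the no-chromatid-interference assumption constrains tetrad probabilities (a textbook sketch, not the article's chi-square models): given the crossover count between two markers, random strand choice fixes the transitions among parental ditype (PD), tetratype (TT), and nonparental ditype (NPD), so tetrad-type frequencies follow from any assumed crossover-count distribution, here Poisson for simplicity.

```python
import numpy as np
from scipy.stats import poisson

# Under no chromatid interference, each additional crossover moves a
# tetrad between states with fixed probabilities (strands chosen at
# random): PD -> TT always; TT -> PD w.p. 1/4, stays TT w.p. 1/2,
# -> NPD w.p. 1/4; NPD -> TT always.
T = np.array([[0.0, 1.0, 0.0],    # from PD
              [0.25, 0.5, 0.25],  # from TT
              [0.0, 1.0, 0.0]])   # from NPD

def tetrad_freqs(mean_crossovers, kmax=60):
    """PD/TT/NPD frequencies with a Poisson crossover count (an
    assumed distribution; the paper treats more general processes)."""
    state = np.array([1.0, 0.0, 0.0])  # zero crossovers: all PD
    freqs = poisson.pmf(0, mean_crossovers) * state
    for k in range(1, kmax):
        state = state @ T
        freqs = freqs + poisson.pmf(k, mean_crossovers) * state
    return freqs  # [P(PD), P(TT), P(NPD)]

print(tetrad_freqs(0.2))  # tightly linked markers: mostly PD, few NPD
```

As the mean crossover count grows, the frequencies approach the classical limit of 1/6 PD, 2/3 TT, 1/6 NPD.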


2014 ◽  
Vol 70 (3) ◽  
pp. 248-256 ◽  
Author(s):  
Julian Henn ◽  
Kathrin Meindl

The formerly introduced theoretical R values [Henn & Schönleber (2013). Acta Cryst. A69, 549–558] are used to develop a relative indicator of systematic errors in model refinements, R_meta, and applied to published charge-density data. The counter of R_meta gives an absolute measure of systematic errors in percentage points. The residuals (I_o − I_c)/σ(I_o) of published data are examined. It is found that most published models correspond to residual distributions that are not consistent with the assumption of a Gaussian distribution. The consistency with a Gaussian distribution, however, is important, as the model parameter estimates and their standard uncertainties from a least-squares procedure are valid only under this assumption. The effect of correlations introduced by the structure model is briefly discussed with the help of artificial data and discarded as a source of serious correlations in the examined example. Intensity and significance cutoffs applied in the refinement procedure are found to be mechanisms preventing residual distributions from becoming Gaussian. Model refinements against artificial data yield zero or close-to-zero values for R_meta when the data are not truncated, and small negative values in the case of application of a moderate cutoff I_o > 0. It is well known from the literature that the application of cutoff values leads to model bias [Hirshfeld & Rabinovich (1973). Acta Cryst. A29, 510–513].
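
A minimal sketch of the residual check described above (hypothetical data, and not the R_meta implementation itself): form the standardized residuals (I_o − I_c)/σ(I_o) and test them against a standard normal distribution.

```python
import numpy as np
from scipy import stats

def residual_check(I_obs, I_calc, sigma):
    """Standardized residuals and a normality test against N(0, 1)."""
    z = (I_obs - I_calc) / sigma
    # Kolmogorov-Smirnov test against a standard normal; a small
    # p-value flags distributions inconsistent with Gaussian errors.
    ks_stat, p_value = stats.kstest(z, "norm")
    return z.mean(), z.std(), ks_stat, p_value

# Hypothetical refinement output: observed/calculated intensities
rng = np.random.default_rng(1)
I_calc = rng.uniform(10, 1000, 5000)
sigma = np.sqrt(I_calc)
I_obs = I_calc + rng.normal(0, 1, 5000) * sigma
mean, sd, ks, p = residual_check(I_obs, I_calc, sigma)
print(f"mean={mean:.3f} sd={sd:.3f} KS={ks:.3f} p={p:.3f}")
# Truncating the data (e.g., keeping only reflections above an
# intensity or significance cutoff) skews z away from Gaussian.
```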


2011 ◽  
Vol 139 (7) ◽  
pp. 2276-2289 ◽  
Author(s):  
Arthur A. Small ◽  
Jason B. Stefik ◽  
Johannes Verlinde ◽  
Nathaniel C. Johnson

Abstract A decision algorithm is presented that improves the productivity of data collection activities in stochastic environments. The algorithm was developed in the context of an aircraft field campaign organized to collect data in situ from boundary layer clouds. Required lead times implied that aircraft deployments had to be scheduled in advance, based on imperfect forecasts regarding the presence of conditions meeting specified requirements. Given an overall cap on the number of flights, daily fly/no-fly decisions were taken traditionally using a discussion-intensive process involving heuristic analysis of weather forecasts by a group of skilled human investigators. An alternative automated decision process uses self-organizing maps to convert weather forecasts into quantified probabilities of suitable conditions, together with a dynamic programming procedure to compute the opportunity costs of using up scarce flights from the limited budget. Applied to conditions prevailing during the 2009 Routine ARM Aerial Facility (AAF) Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) campaign of the U.S. Department of Energy’s Atmospheric Radiation Measurement Program, the algorithm shows a 21% increase in data yield and a 66% improvement in skill over the heuristic decision process used traditionally. The algorithmic approach promises to free up investigators’ cognitive resources, reduce stress on flight crews, and increase productivity in a range of data collection applications.
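
A toy sketch of the dynamic-programming core described above (hypothetical distributions and numbers, not the campaign's actual algorithm): the value of a remaining flight budget is computed backward in time, and a flight is scheduled only when today's success probability beats the opportunity cost of spending a flight.

```python
import numpy as np

def flight_values(p_dist, days, flights):
    """V[t][f]: expected data yield with t days and f flights left,
    when each future day's success probability is a draw from p_dist
    (an assumed forecast climatology)."""
    V = np.zeros((days + 1, flights + 1))
    for t in range(1, days + 1):
        for f in range(1, flights + 1):
            # Act optimally given the day's forecast p:
            # fly  -> p + V[t-1][f-1];  hold -> V[t-1][f]
            V[t][f] = np.mean(np.maximum(p_dist + V[t - 1][f - 1],
                                         V[t - 1][f]))
    return V

def decide(p_today, V, t, f):
    """Fly iff today's expected gain exceeds the opportunity cost."""
    return p_today + V[t - 1][f - 1] >= V[t - 1][f]

# Hypothetical climatology of daily probabilities of suitable clouds
p_dist = np.random.default_rng(2).beta(2, 5, 10000)
V = flight_values(p_dist, days=60, flights=20)
print(decide(0.7, V, t=60, f=20))  # strong forecast early on: fly
```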


1977 ◽  
Vol 67 (3) ◽  
pp. 751-769
Author(s):  
Nazieh K. Yacoub ◽  
Brian J. Mitchell

Abstract Surface waves generated by six earthquakes and two nuclear explosions are used to study the attenuation coefficients of the fundamental Rayleigh mode across Eurasia. Rayleigh-wave amplitude data yield average attenuation coefficients at periods between 4 and 50 sec. The data exhibit relatively large standard deviations, and in some cases the average attenuation coefficients take on negative values, which may be due to regional variations of the attenuative properties of the crust, lateral refraction, multipathing, and scattering. A method has been developed to investigate the regional variation in the attenuative properties of the Eurasian crust and its effect on surface-wave amplitude data, employing the evaluated average attenuation coefficients for the fundamental Rayleigh mode. For this investigation, Eurasia is divided into two regions, one considered to be relatively stable and the other considered to be tectonic in nature. This regionalization shows that the tectonic regions exhibit higher attenuation than the stable regions in the period range below about 20 sec, whereas in the period range above about 20 sec no clear difference can be observed between the two regions. Although the effects of lateral refraction and multipathing may still significantly affect the observations, the regionalization lowers the standard deviations considerably and eliminates the negative values which were obtained in the unregionalized determinations.
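
A minimal sketch of how an average attenuation coefficient can be estimated from amplitude data of this kind (a generic regression formulation with an assumed geometrical-spreading correction, not the authors' exact method): after removing spreading, the attenuation coefficient at a given period is the negative slope of log amplitude against distance.

```python
import numpy as np

def attenuation_coefficient(distances_km, amplitudes, spreading_exp=0.5):
    """Estimate gamma from A(r) ~ r**(-spreading_exp) * exp(-gamma*r).
    Negative estimates can occur when scatter (multipathing, lateral
    refraction, regional variation) overwhelms the attenuation signal."""
    # Remove geometrical spreading, then fit ln(A_corrected) = c - gamma*r
    y = np.log(amplitudes * distances_km**spreading_exp)
    slope, _ = np.polyfit(distances_km, y, 1)
    return -slope  # gamma in 1/km

# Hypothetical amplitude observations at one period (e.g., 20 sec)
rng = np.random.default_rng(3)
r = np.array([500.0, 1000.0, 2000.0, 3000.0, 4000.0])
A = 100.0 * r**-0.5 * np.exp(-2e-4 * r) * np.exp(rng.normal(0, 0.1, 5))
print(attenuation_coefficient(r, A))  # close to the true 2e-4 per km
```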


1995 ◽  
Vol 73 (11) ◽  
pp. 1831-1840 ◽  
Author(s):  
Bojie Wang ◽  
Peter R. Ogilby

A recently developed spectroscopic technique was used to determine oxygen diffusion coefficients as a function of temperature for polystyrene and polycarbonate films. Data were recorded at total pressures <300 Torr over the temperature range 5–45 °C under conditions in which argon, helium, and nitrogen, respectively, were copenetrants. In all cases, the presence of the additional gas caused an increase in the oxygen diffusion coefficient. Arrhenius plots of the data yield (a) a diffusion activation barrier, Eact, and (b) a diffusion coefficient, D0, that represents the condition of "barrier-free" gas transport for the temperature domain over which the Arrhenius plot is linear. For all cases examined in both polystyrene and polycarbonate, D0 increased with an increase in the partial pressure of added gas. In polystyrene, the presence of an additional gas did not change Eact. In polycarbonate, Eact obtained in the presence of helium and argon likewise did not differ from that obtained in the absence of the copenetrant. When nitrogen was the added gas, however, a larger value of Eact was obtained. This latter observation is interpreted to reflect the plasticization of polycarbonate by nitrogen. Eact and D0 data are discussed within the context of a model that distinguishes between dynamic and static elements of free volume in the polymer matrix. Keywords: oxygen diffusion, polystyrene, polycarbonate, activation barrier.
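
A minimal sketch of the Arrhenius analysis described above (synthetic numbers, not the measured data): fitting ln D against 1/T yields the activation barrier Eact from the slope and the "barrier-free" coefficient D0 from the intercept, via D = D0 exp(−Eact/RT).

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(T_kelvin, D):
    """Fit ln D = ln D0 - Eact/(R*T); returns (Eact in J/mol, D0)."""
    slope, intercept = np.polyfit(1.0 / T_kelvin, np.log(D), 1)
    return -slope * R, np.exp(intercept)

# Hypothetical diffusion coefficients (cm^2/s) over 5-45 C
T = np.array([278.0, 288.0, 298.0, 308.0, 318.0])
D = 1e-3 * np.exp(-30_000.0 / (R * T))  # synthetic, Eact = 30 kJ/mol
Eact, D0 = arrhenius_fit(T, D)
print(f"Eact = {Eact / 1000:.1f} kJ/mol, D0 = {D0:.2e} cm^2/s")
```

A larger D0 at a fixed Eact shifts the whole Arrhenius line upward, which is the signature reported here for increased copenetrant partial pressure.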


Author(s):  
Valeria Gelardi ◽  
Jeanne Godard ◽  
Dany Paleressompoulle ◽  
Nicolas Claidiere ◽  
Alain Barrat

Network analysis represents a valuable and flexible framework to understand the structure of individual interactions at the population level in animal societies. The versatility of network representations is moreover suited to different types of datasets describing these interactions. However, depending on the data collection method, different pictures of the social bonds between individuals could a priori emerge. Understanding how the data collection method influences the description of the social structure of a group is thus essential to assess the reliability of social studies based on different types of data. This is however rarely feasible, especially for animal groups, where data collection is often challenging. Here, we address this issue by comparing datasets of interactions between primates collected through two different methods: behavioural observations and wearable proximity sensors. We show that, although many directly observed interactions are not detected by the sensors, the global pictures obtained when aggregating the data to build interaction networks turn out to be remarkably similar. Moreover, sensor data yield a reliable social network over short time scales and can be used for long-term studies, showing their important potential for detailed studies of the evolution of animal social groups.
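
A minimal sketch of the comparison described above (hypothetical edge lists, not the study's data): aggregate each data source into a weighted interaction network and compare the resulting pictures, for instance through the rank correlation of edge weights.

```python
import networkx as nx
from scipy.stats import spearmanr

def build_network(interactions):
    """Aggregate (individual_a, individual_b, duration) records into
    a weighted undirected interaction network."""
    G = nx.Graph()
    for a, b, w in interactions:
        prev = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=prev + w)
    return G

# Hypothetical records from the two collection methods
observed = [("A", "B", 3), ("A", "C", 1), ("B", "C", 2), ("C", "D", 4)]
sensed = [("A", "B", 40), ("A", "C", 5), ("B", "C", 30), ("C", "D", 60)]
G_obs, G_sen = build_network(observed), build_network(sensed)

# Compare edge weights on the union of edges (missing edges count as 0)
edges = set(G_obs.edges()) | set(G_sen.edges())
w_obs = [G_obs.get_edge_data(*e, {"weight": 0})["weight"] for e in edges]
w_sen = [G_sen.get_edge_data(*e, {"weight": 0})["weight"] for e in edges]
rho, _ = spearmanr(w_obs, w_sen)
print(rho)  # high rank correlation = similar aggregated pictures
```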


2010 ◽  
Vol 46 (1) ◽  
pp. 1-9 ◽  
Author(s):  
Y. Du ◽  
J. Wang ◽  
Y.F. Ouyang ◽  
L.J. Zhang ◽  
Z.H. Yuan ◽  
...  

An integrated approach of experiment and theoretical computation to acquire enthalpies of formation for ternary compounds is described. The enthalpies of formation (ΔHf) for Al71Fe19Si10 and Al31Mn6Ni2 are measured using a calorimeter. The Miedema model, the CALPHAD method, and first-principles calculations are employed to compute ΔHf for the above compounds and several other Al-based ternary compounds. It is found that the first-principles data agree well with the experimental values and can thus serve as key 'experimental data' for the CALPHAD approach.
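
A minimal sketch of the first-principles bookkeeping involved (illustrative energies, not the paper's calculations): the enthalpy of formation per atom is the compound's total energy minus the composition-weighted energies of the pure elements in their reference states.

```python
def formation_enthalpy(e_compound, counts, e_elements):
    """Delta H_f per atom: E(compound) minus the composition-weighted
    sum of elemental reference energies, all in eV/atom."""
    n_total = sum(counts.values())
    e_ref = sum(n * e_elements[el] for el, n in counts.items()) / n_total
    return e_compound - e_ref

# Hypothetical DFT total energies in eV/atom (reference states)
e_elements = {"Al": -3.75, "Fe": -8.31, "Si": -5.42}
counts = {"Al": 71, "Fe": 19, "Si": 10}  # Al71Fe19Si10
e_compound = -5.05                       # hypothetical eV/atom
print(f"{formation_enthalpy(e_compound, counts, e_elements):.3f} eV/atom")
```

A negative value indicates the compound is enthalpically favored over the pure elements, which is what makes such data usable as CALPHAD input.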

