Activities for Students: Using Graphing Calculators to Model Real-World Data

2004 ◽  
Vol 97 (5) ◽  
pp. 328-342
Author(s):  
Berchie W. Holliday ◽  
Lauren R. Duff

Mathematics teachers understand that calculators have revolutionized the teaching of secondary school mathematics. After students have demonstrated their ability to perform such computations without calculators, calculators can free students and teachers from redundant computation. Graphing calculators, in particular, free students from computing the dependent values needed to construct line graphs, for example. One problem, however, is how to teach students to use a graphing calculator to plot, calculate, and graph linear equations of best fit from real-world data. Another is getting students to engage in the task and to construct an increasingly useful conceptualization of linear modeling. In the beginning, teachers should, perhaps, provide direct instruction, followed by modeling how to enter and graph data sets efficiently.
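The line of best fit that a graphing calculator's linear-regression command produces can be computed with a few lines of code. A minimal sketch, with a small hypothetical data set (hours studied vs. test score; the numbers are illustrative, not from the article):

```python
import numpy as np

# Hypothetical data set: hours studied (x) vs. test score (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([52.0, 57.0, 61.0, 68.0, 71.0])

# np.polyfit with degree 1 returns the slope and intercept of the
# least-squares line of best fit -- the same line a graphing
# calculator's linear-regression feature computes.
slope, intercept = np.polyfit(x, y, 1)
print(f"y = {slope:.2f}x + {intercept:.2f}")  # y = 4.90x + 47.10
```

Students can then compare the fitted line against a scatter plot of the data, just as they would on the calculator screen.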

2009 ◽  
Vol 103 (1) ◽  
pp. 62-68
Author(s):  
Kathleen Cage Mittag ◽  
Sharon Taylor

Using activities to create and collect data is not a new idea. Teachers have been incorporating real-world data into their classes since at least the advent of the graphing calculator. Plenty of data collection activities and data sets exist, and the graphing calculator has made modeling data much easier. However, we were in search of a better physical model for a quadratic: we wanted students to see an actual parabola take shape in real time and then explore its characteristics, but we could not find such a hands-on model.


Author(s):  
K Sobha Rani

Collaborative filtering suffers from the problems of data sparsity and cold start, which dramatically degrade recommendation performance. To help resolve these issues, we propose TrustSVD, a trust-based matrix factorization technique. By analyzing social trust data from four real-world data sets, we conclude that not only the explicit but also the implicit influence of both ratings and trust should be taken into consideration in a recommendation model. Hence, we build on top of SVD++, a state-of-the-art recommendation algorithm that inherently involves the explicit and implicit influence of rated items, by further incorporating both the explicit and implicit influence of trusted users on the prediction of items for an active user. To our knowledge, this is the first work to extend SVD++ with social trust information. Experimental results on the four data sets demonstrate that our approach, TrustSVD, achieves better accuracy than ten other counterparts and better handles the issues in question.
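The prediction rule described above can be sketched concretely. Below is a minimal, illustrative version of an SVD++-style prediction augmented with trust: the effective user vector sums the user's explicit factors, implicit feedback from rated items, and implicit influence from trusted users. All factor matrices, dimensions, and the sample rating/trust lists are hypothetical stand-ins for quantities the real model learns by optimization:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3

# Hypothetical latent factors (in the real model these are learned by SGD).
P = rng.normal(0, 0.1, (n_users, k))   # explicit user factors
Q = rng.normal(0, 0.1, (n_items, k))   # item factors
Y = rng.normal(0, 0.1, (n_items, k))   # implicit rated-item factors (SVD++)
W = rng.normal(0, 0.1, (n_users, k))   # implicit trusted-user factors (TrustSVD)
mu = 3.5                               # global rating mean
b_u = rng.normal(0, 0.1, n_users)      # user biases
b_i = rng.normal(0, 0.1, n_items)      # item biases

rated = {0: [1, 3], 1: [0], 2: [2, 4], 3: [1]}   # items each user has rated
trusted = {0: [1, 2], 1: [3], 2: [], 3: [0]}     # users each user trusts

def predict(u, i):
    """Predict user u's rating of item i: the user vector combines
    explicit factors with normalized sums of implicit rated-item and
    trusted-user factors."""
    p = P[u].copy()
    if rated[u]:
        p += Y[rated[u]].sum(axis=0) / np.sqrt(len(rated[u]))
    if trusted[u]:
        p += W[trusted[u]].sum(axis=0) / np.sqrt(len(trusted[u]))
    return mu + b_u[u] + b_i[i] + p @ Q[i]

print(predict(0, 2))
```

With small random factors the prediction stays near the global mean; training would move the factors so that predictions match observed ratings.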


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 507
Author(s):  
Piotr Białczak ◽  
Wojciech Mazurczyk

Malicious software utilizes the HTTP protocol for communication purposes, creating network traffic that is hard to identify because it blends into the traffic generated by benign applications. To this aim, fingerprinting tools have been developed to help track and identify such traffic by providing a short representation of malicious HTTP requests. However, currently existing tools either do not analyze all information included in the HTTP message or analyze it insufficiently. To address these issues, we propose Hfinger, a novel malware HTTP request fingerprinting tool. It extracts information from parts of the request such as the URI, protocol information, headers, and payload, providing a concise request representation that preserves the extracted information in a form interpretable by a human analyst. For the developed solution, we have performed an extensive experimental evaluation using real-world data sets, and we also compared Hfinger with the most related and popular existing tools such as FATT, Mercury, and p0f. The conducted effectiveness analysis reveals that on average only 1.85% of requests fingerprinted by Hfinger collide between malware families, which is 8–34 times lower than for existing tools. Moreover, unlike these tools, in default mode Hfinger does not introduce collisions between malware and benign applications, and it achieves this while increasing the number of fingerprints by at most 3 times. As a result, Hfinger can effectively track and hunt malware by providing more unique fingerprints than other standard tools.
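The general idea of a structural HTTP request fingerprint can be illustrated in a few lines. The sketch below is not Hfinger's actual format; it is a simplified, hypothetical fingerprint that condenses the request line, the ordered header names, and the payload length into a short, human-readable string, which is the kind of concise representation such tools produce:

```python
import hashlib

def fingerprint(request_text: str) -> str:
    """Illustrative fingerprint (not Hfinger's real format): combines
    method, URI length, protocol version, a hash of the header-name
    order, and the payload length."""
    head, _, body = request_text.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, uri, version = lines[0].split(" ", 2)
    header_names = [ln.split(":", 1)[0].lower() for ln in lines[1:] if ":" in ln]
    # Hash only the header-order component; keep the rest human-readable.
    header_sig = hashlib.md5(",".join(header_names).encode()).hexdigest()[:8]
    return f"{method}|{len(uri)}|{version}|{header_sig}|{len(body)}"

req = ("GET /gate.php?id=42 HTTP/1.1\r\n"
       "Host: example.com\r\n"
       "User-Agent: Mozilla/5.0\r\n"
       "\r\n")
print(fingerprint(req))  # e.g. GET|15|HTTP/1.1|<8 hex chars>|0
```

Requests from the same malware family tend to share such structural traits (header order, URI shape), so their fingerprints collide with each other but not with those of benign applications.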


2021 ◽  
pp. 1-13
Author(s):  
Qingtian Zeng ◽  
Xishi Zhao ◽  
Xiaohui Hu ◽  
Hua Duan ◽  
Zhongying Zhao ◽  
...  

Word embeddings have been successfully applied in many natural language processing tasks due to their effectiveness. However, the state-of-the-art algorithms for learning word representations from large amounts of text documents ignore emotional information, which is a significant research problem that must be addressed. To solve this problem, we propose an emotional word embedding (EWE) model for sentiment analysis in this paper. This method first applies pre-trained word vectors to represent document features using two different linear weighting methods. Then, the resulting document vectors are input to a classification model and used to train a neural-network-based text sentiment classifier. In this way, the emotional polarity of the text is propagated into the word vectors. The experimental results on three kinds of real-world data sets demonstrate that the proposed EWE model achieves superior performance on text sentiment prediction, text similarity calculation, and word emotional expression tasks compared to other state-of-the-art models.
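The first step described above, linearly weighting pre-trained word vectors into a single document vector, can be sketched as follows. The toy vocabulary and 2-dimensional vectors are illustrative only; in practice the vectors would come from a model such as word2vec or GloVe:

```python
import numpy as np

# Toy pre-trained word vectors (hypothetical; real ones are 100-300 dims).
vectors = {
    "great": np.array([0.9, 0.1]),
    "awful": np.array([-0.8, 0.2]),
    "movie": np.array([0.0, 0.5]),
}

def doc_vector(tokens, weights=None):
    """Linear weighting of word vectors into one document vector.
    With weights=None this is plain averaging; passing per-token
    weights (e.g. TF-IDF) gives a second weighting scheme."""
    vecs = [vectors[t] for t in tokens if t in vectors]
    if weights is None:
        weights = np.ones(len(vecs))
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * np.stack(vecs)).sum(axis=0) / w.sum()

d = doc_vector(["great", "movie"])
print(d)  # element-wise average of the two word vectors
```

The resulting document vectors are then fed to a sentiment classifier; backpropagating its loss into the word vectors is what injects emotional polarity into the embedding space.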


Author(s):  
Martyna Daria Swiatczak

This study assesses the extent to which the two main Configurational Comparative Methods (CCMs), i.e. Qualitative Comparative Analysis (QCA) and Coincidence Analysis (CNA), produce different models. It further explains how this non-identity is due to the different algorithms upon which the two methods are based, namely QCA's Quine–McCluskey algorithm and the CNA algorithm. I offer an overview of the fundamental differences between QCA and CNA and demonstrate both underlying algorithms on three data sets of ascending proximity to real-world data. Subsequent simulation studies in scenarios of varying sample sizes and degrees of noise in the data show high overall ratios of non-identity between the QCA parsimonious solution and the CNA atomic solution for varying analytical choices, i.e. different consistency and coverage threshold values and different ways to derive QCA's parsimonious solution. Clarity on the contrasts between the two methods should enable scholars to make more informed decisions on their methodological approaches, enhance their understanding of what is happening behind the results generated by the software packages, and better navigate the interpretation of results. Clarity on the non-identity between the underlying algorithms and its consequences for the results should provide a basis for a methodological discussion about which method, and which variants thereof, are more successful in deriving which search target.
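The consistency and coverage thresholds mentioned above are the standard fuzzy-set measures of fit used in QCA. A minimal sketch of how they are computed, with hypothetical membership scores:

```python
def consistency(x, y):
    """Fuzzy-set consistency of 'X is sufficient for Y':
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Fuzzy-set coverage of Y by X:
    sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical fuzzy membership scores of four cases in a
# condition X and an outcome Y.
x = [0.9, 0.7, 0.3, 0.1]
y = [1.0, 0.6, 0.4, 0.3]
print(consistency(x, y), coverage(x, y))  # 0.95 and ~0.826
```

A researcher's choice of consistency threshold (commonly 0.75–0.9) determines which configurations enter the minimization step, which is one of the analytical choices the simulation studies vary.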


2021 ◽  
Vol 13 (3) ◽  
pp. 530
Author(s):  
Junjun Yin ◽  
Jian Yang

Pseudo quad-polarimetric (quad-pol) image reconstruction from hybrid dual-pol (or compact polarimetric (CP)) synthetic aperture radar (SAR) imagery is an important category of techniques for radar polarimetric applications. The literature addresses three key aspects of the reconstruction methods, i.e., the scattering symmetry assumption, the reconstruction model, and the approach used to solve for the unknowns. Since CP measurements depend on the CP mode configuration, different reconstruction procedures have been designed for different transmit waves, meaning the reconstruction procedures were not unified. In this study, we propose a unified reconstruction framework for the general CP mode, applicable to a mode with an arbitrary transmitted ellipse wave. The unified reconstruction procedure is based on the formalized CP descriptors. The general CP symmetric-scattering model-based three-component decomposition method is also employed to fit the reconstruction model parameter. Finally, a least squares (LS) estimation method, originally proposed for linear π/4 CP data, is extended to the arbitrary CP mode to estimate the solution of the system of non-linear equations. Validation is carried out on polarimetric data sets from both RADARSAT-2 (C-band) and ALOS-2/PALSAR (L-band) to compare the performances of the reconstruction models, methods, and CP modes.


2021 ◽  
pp. 1-13
Author(s):  
Hailin Liu ◽  
Fangqing Gu ◽  
Zixian Lin

Transfer learning methods exploit similarities between different data sets to improve the performance of the target task by transferring knowledge from source tasks to the target task. "What to transfer" is a main research issue in transfer learning. Existing transfer learning methods generally need to acquire the shared parameters by integrating human knowledge. However, in many real applications, it is unknown beforehand which parameters can be shared. A transfer learning model is essentially a special multi-objective optimization problem. Consequently, this paper proposes a novel auto-sharing parameter technique for transfer learning based on multi-objective optimization and solves the optimization problem using a multi-swarm particle swarm optimizer. Each task objective is simultaneously optimized by a sub-swarm. The current best particle from the sub-swarm of the target task is used to guide the search of particles of the source tasks, and vice versa. The target task and source task are jointly solved by sharing the information of the best particle, which works as an inductive bias. Experiments carried out to evaluate the proposed algorithm on several synthetic data sets and two real-world data sets (a school data set and a landmine data set) show that the proposed algorithm is effective.
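The cross-swarm guidance described above can be sketched in miniature. In the toy version below, two sub-swarms each minimize their own objective, and each swarm's velocity update is attracted toward the best particle of the *other* swarm, the information-sharing mechanism the abstract describes. The objective functions, swarm sizes, and PSO coefficients are all illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical task objectives with nearby optima (1.0 and 1.2),
# standing in for related target and source tasks.
def f_target(x): return np.sum((x - 1.0) ** 2)
def f_source(x): return np.sum((x - 1.2) ** 2)

def multi_swarm_pso(objectives, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """One sub-swarm per objective; each swarm is guided by the best
    particle of the other swarm (simplified two-task sketch)."""
    pos = [rng.uniform(-5, 5, (n, dim)) for _ in objectives]
    vel = [np.zeros((n, dim)) for _ in objectives]
    pbest = [p.copy() for p in pos]
    pbest_val = [np.array([f(x) for x in p]) for f, p in zip(objectives, pos)]
    for _ in range(iters):
        gbest = [pb[v.argmin()] for pb, v in zip(pbest, pbest_val)]
        for s, f in enumerate(objectives):
            other = gbest[1 - s]          # best particle of the other swarm
            r1, r2 = rng.random((2, n, dim))
            vel[s] = (w * vel[s] + c1 * r1 * (pbest[s] - pos[s])
                      + c2 * r2 * (other - pos[s]))
            pos[s] = pos[s] + vel[s]
            vals = np.array([f(x) for x in pos[s]])
            improved = vals < pbest_val[s]
            pbest[s][improved] = pos[s][improved]
            pbest_val[s][improved] = vals[improved]
    return [pb[v.argmin()] for pb, v in zip(pbest, pbest_val)]

best_t, best_s = multi_swarm_pso([f_target, f_source])
print(best_t, best_s)  # both should land near their respective optima
```

Because the tasks' optima are close, the other swarm's best particle pulls the search into a useful region, acting as the inductive bias the abstract mentions; for unrelated tasks this guidance could instead mislead the search.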


Author(s):  
Chen Lin ◽  
Xiaolin Shen ◽  
Si Chen ◽  
Muhua Zhu ◽  
Yanghua Xiao

The study of consumer psychology reveals two categories of consumption decision procedures: compensatory rules and non-compensatory rules. Existing recommendation models based on latent factor models assume that consumers follow compensatory rules, i.e. they evaluate an item over multiple aspects and compute a weighted and/or summed score, which is used to derive the rating or ranking of the item. However, it has been shown in the consumer behavior literature that consumers adopt non-compensatory rules more often than compensatory rules. Our main contribution in this paper is to study the unexplored area of utilizing non-compensatory rules in recommendation models. Our general assumptions are: (1) there are K universal hidden aspects, and in each evaluation session only one aspect is chosen as the prominent aspect according to user preference; (2) evaluations over prominent and non-prominent aspects are non-compensatory — evaluation is mainly based on item performance on the prominent aspect, while for non-prominent aspects the user sets a minimal acceptable threshold. We give a conceptual model for these general assumptions and show how it can be realized in both pointwise rating prediction models and pairwise ranking prediction models. Experiments on real-world data sets validate that adopting non-compensatory rules improves recommendation performance for both rating and ranking models.
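The two assumptions above translate directly into a scoring rule: the prominent aspect drives the score, while non-prominent aspects only screen items out. A minimal sketch with hypothetical aspect scores (the items, scores, and threshold are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical aspect scores of three items over K = 3 aspects.
items = {
    "A": np.array([0.9, 0.2, 0.6]),
    "B": np.array([0.5, 0.8, 0.7]),
    "C": np.array([0.95, 0.1, 0.4]),
}

def non_compensatory_score(aspects, prominent, threshold=0.15):
    """Non-compensatory evaluation: the score is the prominent-aspect
    performance alone, but any non-prominent aspect below the minimal
    acceptable threshold disqualifies the item."""
    others = np.delete(aspects, prominent)
    if (others < threshold).any():
        return -np.inf            # fails the screening rule
    return aspects[prominent]

# A user whose prominent aspect is aspect 0:
ranking = sorted(items, key=lambda i: non_compensatory_score(items[i], 0),
                 reverse=True)
print(ranking)  # ['A', 'B', 'C'] -- C is screened out despite the top score
```

Note the contrast with a compensatory (weighted-sum) rule, under which C's strong prominent aspect could offset its weak one and rank it first.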


2016 ◽  
Vol 6 (2) ◽  
pp. 29-45
Author(s):  
Francis Nzuki

Taking into consideration the significance of socio-economic contexts, this research investigates teachers' perceptions of the role of graphing calculators, as mediating tools, in facilitating mathematics instruction of students from two different SES backgrounds. The main source of data is in-depth semi-structured interviews with four teachers, two from each SES school. In general, the participants' perceptions of the role of the graphing calculator depended on the context within which it was used. The participants also played a crucial role in determining the nature of graphing calculator use, with the low-SES school's participants appearing not to involve their students in lessons that capitalized on the powerful characteristics of graphing calculators. To tease out the role of the situational context, a four-component framework was conceptualized, consisting of teacher, student, subject matter, and graphing calculator use. The components of the framework were taken to be continuously in interaction with one another, implying that a change or perturbation in one component affects all the others. These continuous interactions suggest that addressing equity issues in graphing calculator use should be an ongoing process of continuously locating strategies that afford all students appropriate access to and use of graphing calculators.


2017 ◽  
Vol 26 (11) ◽  
pp. 1750124 ◽  
Author(s):  
E. Ebrahimi ◽  
H. Golchin ◽  
A. Mehrabi ◽  
S. M. S. Movahed

In this paper, we investigate the ghost dark energy model in the presence of a nonlinear interaction between dark energy and dark matter. We also extend the analysis to the so-called generalized ghost dark energy (GGDE), for which [Formula: see text]. The model contains three free parameters, [Formula: see text] and [Formula: see text] (the coupling coefficient of the interaction). We propose three kinds of nonlinear interaction terms and discuss the behavior of the equation of state, deceleration, and dark energy density parameters of the model. We also find the squared sound speed and search for signs of stability of the model. To compare the interacting GGDE model with observational data sets, we use recent observational outcomes, namely SNIa from the JLA catalog, the Hubble parameter, baryonic acoustic oscillations, and the most relevant CMB parameters, including the position of the acoustic peaks, the shift parameters, and the redshift to recombination. For GGDE with the first nonlinear interaction, the joint analysis indicates that [Formula: see text], [Formula: see text] and [Formula: see text] at 1σ error. For the second interaction, the best-fit values at [Formula: see text] confidence are [Formula: see text], [Formula: see text] and [Formula: see text]. According to the combination of all observational data sets considered in this paper, the best-fit values for the third nonlinearly interacting model are [Formula: see text], [Formula: see text] and [Formula: see text] at the [Formula: see text] confidence interval. Finally, we find that the presence of interaction in the mentioned models is compatible with current observational data sets.

