A New Measure of Complementarity in Market Basket Data

2021 ◽  
Vol 16 (4) ◽  
pp. 670-681
Author(s):  
Radosław Puka ◽  
Stanislaw Jedrusik

Modern IT systems collect detailed data on each activity, transaction, forum entry, conversation and many other areas. The availability of large data volumes in the business, industry and research fields opens up new opportunities for the empirical verification of various economic theories and laws. The analysis of big datasets in turn allows us to look at many issues from a new point of view and see the dependencies that are otherwise difficult to derive. In this paper, we propose a new measure for dependencies between goods in market basket data. The introduced measure was inspired by the well-known microeconomic concept of complementarity. Due to its similar properties to those of complementarity, the new measure was called basket complementarity (b-complementarity). B-complementarity not only measures the strength of dependencies between goods but also measures the direction of these dependencies. The values of the proposed measure can be relatively easily calculated using market basket data. This paper also presents a simple example illustrating this new concept, areas of possible application (e.g., in e-commerce) and preliminary results of searching for goods that meet the criteria of basket complementarity in real market basket data.
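
The abstract stresses that b-complementarity is directional and can be computed straight from basket data. Below is a minimal sketch of that idea; since the paper's exact formula is not reproduced here, plain directed confidence P(B|A) stands in for it, and the function name and toy baskets are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def directional_scores(baskets):
    """Return {(a, b): P(b in basket | a in basket)} for all co-occurring pairs."""
    item_counts = Counter()
    pair_counts = Counter()
    for basket in baskets:
        items = set(basket)              # ignore duplicates within one basket
        item_counts.update(items)
        pair_counts.update(combinations(sorted(items), 2))

    scores = {}
    for (a, b), joint in pair_counts.items():
        scores[(a, b)] = joint / item_counts[a]   # strength of a -> b
        scores[(b, a)] = joint / item_counts[b]   # strength of b -> a (may differ)
    return scores

baskets = [["bread", "butter"], ["bread", "butter", "jam"],
           ["bread"], ["butter"], ["bread", "jam"]]
for pair, conf in sorted(directional_scores(baskets).items()):
    print(pair, round(conf, 2))
```

Note how the two directions differ in the toy data (bread→butter is 0.5 while butter→bread is 0.67), which a symmetric measure such as lift would not capture.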


Metabolites ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 113
Author(s):  
Julia Koblitz ◽  
Sabine Will ◽  
S. Riemer ◽  
Thomas Ulas ◽  
Meina Neumann-Schaal ◽  
...  

Genome-scale metabolic models are of high interest in a number of different research fields. Flux balance analysis (FBA) and other mathematical methods allow the prediction of the steady-state behavior of metabolic networks under different environmental conditions. However, many existing applications for flux optimization do not provide a metabolite-centric view on fluxes. Metano is a standalone, open-source toolbox for the analysis and refinement of metabolic models. While flux distributions in metabolic networks are predominantly analyzed from a reaction-centric point of view, the Metano methods of split-ratio analysis and metabolite flux minimization also allow a metabolite-centric view on flux distributions. In addition, we present MMTB (Metano Modeling Toolbox), a web-based toolbox for metabolic modeling including a user-friendly interface to Metano methods. MMTB assists in the bottom-up construction of metabolic models by integrating reaction and enzymatic annotation data from different databases. Furthermore, MMTB is especially designed for non-experienced users by providing an intuitive interface to the most commonly used modeling methods and offering novel visualizations. Additionally, MMTB allows users to upload their models, which can in turn be explored and analyzed by the community. We introduce MMTB with two use cases, involving a published model of Corynebacterium glutamicum and a newly created model of Phaeobacter inhibens.
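
For readers unfamiliar with FBA, a minimal sketch of the underlying linear program may help. It uses SciPy's generic linear programming routine rather than Metano's or MMTB's own API, and the two-metabolite toy network is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# FBA is the linear program  max c.v  s.t.  S v = 0,  lb <= v <= ub,
# where S is the stoichiometric matrix and v the vector of reaction fluxes.
# Toy network -- R1: -> A,  R2: A -> B,  R3: B -> (biomass)
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds for R1..R3
c = np.array([0, 0, 1])                # objective: maximize flux through R3

# linprog minimizes, so negate the objective.
res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", res.x)        # steady-state flux distribution
print("objective value:", -res.fun)    # predicted maximal biomass flux
```

Genome-scale models have thousands of reactions, but the mathematical form, a linear program over the stoichiometric matrix, is exactly the same.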


2021 ◽  
Vol 04 (1) ◽  
pp. 54-54
Author(s):  
V. R. Nigmatullin ◽  
I. R. Nigmatullin ◽  
R. G. Nigmatullin ◽  
A.M. Migranov ◽  
...  

Currently, to increase the efficiency of industrial production, high-performance and expensive technological equipment is increasingly used. From the point of view of efficiency and reliability, the weakest link in such equipment is the components and parts of heavily loaded tribo-couplings, which operate both at significantly different temperatures (even under comparatively mild conditions, the temperature difference can reach 100-120 degrees) and under harsh climatic conditions (high humidity, the presence of abrasives and other chemically active elements in the atmosphere). Analysis shows that the frequency of failures of friction units, and accordingly the cost of their restoration, reaches 9-20 percent of the cost of all equipment, not counting the significant losses of income (profit) that the enterprise incurs from downtime. The solution to this problem is based on studying the wear rate of friction units through the wear products accumulated in working oils, cooling lubricants, and greases. A digital equipment monitoring system (DSMT) has been developed and implemented, which dynamically records the quantity of wear products and the oil temperature with original modern recording devices and then processes and applies these data. The system also includes methods for finding, in large datasets, the information that is useful and necessary in theoretical and practical terms for similar equipment controlled by the digital monitoring system. The advantages of the DSMT are the ability to predict the reliability of the equipment, reduce production risks, and significantly reduce inefficient costs.
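
The abstract does not detail the DSMT's algorithms, so the following is only a generic sketch of trend-based wear monitoring: fit the recorded wear-particle concentration against operating hours and extrapolate to a condemning limit. All numbers are invented for illustration.

```python
import numpy as np

hours = np.array([0, 100, 200, 300, 400, 500], dtype=float)
particles_ppm = np.array([12, 15, 19, 24, 30, 37], dtype=float)  # wear debris in oil
ALARM_PPM = 60.0   # hypothetical condemning limit for this friction unit

# Fit a linear wear trend and extrapolate to the alarm threshold.
slope, intercept = np.polyfit(hours, particles_ppm, 1)
if slope > 0:
    hours_to_alarm = (ALARM_PPM - intercept) / slope - hours[-1]
    print(f"wear rate: {slope:.3f} ppm/h, "
          f"estimated {hours_to_alarm:.0f} h until alarm threshold")
else:
    print("no increasing wear trend detected")
```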


Author(s):  
Vera Savchenko ◽  
Oleksandr Gai ◽  
Oksana Yurchenko ◽  
...  

The article considers the essence of accounting theories, approaches to distinguishing them, the relationship between accounting and economic theories, and the directions in which accounting theories develop in accordance with the needs of economic and social development. Approaches to the classification of accounting theories are generalized, as are approaches to the interpretation of "accounting theory"; the peculiarities of interpreting the subject of accounting from the point of view of different accounting theories are revealed, and the objective need to expand the range of accounting objects is substantiated. In the context of the formation and development of accounting theories, the category of "social costs" is considered as an accounting object.


Author(s):  
Mariusz Maciejczak ◽  
Adrian Słodki

The sector of micro, small and medium-sized enterprises is important for any economy, including Poland's. An analysis of the industrial organization of this sector confirmed that the owners and managers of such companies apply strategies that are rational from their point of view, but not from the perspective of real market conditions. It is argued, therefore, that game theory offers them a way to enhance the competences and performance of their organizations. Based on a randomized sample of Polish micro and small companies, the paper aimed to find out whether managers apply game-theoretic rationales when choosing a pricing strategy on entering the market. It was confirmed that they do not, and that they do not play the Nash equilibrium in the strategic interaction over price levels. Instead, they applied a maxmin strategy, which maximizes the worst-case outcome of the game. Thus there is a real chance that if entrepreneurs analyzed the situation in game-theoretic terms, their strategies would be more accurate and would provide better outcomes.
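
To make the maxmin rule concrete, here is a small sketch on an invented 2x2 entry-pricing payoff matrix; the actual games studied in the paper are not reproduced here, and all numbers are hypothetical.

```python
import numpy as np

# Rows: our price (low/high); columns: competitor's price (low/high);
# entries: our payoff.
payoff = np.array([
    [4,  9],    # we price low
    [2, 12],    # we price high
])

# Maxmin over pure strategies: pick the row whose worst case is best.
worst_case = payoff.min(axis=1)              # [4, 2]
maxmin_row = int(worst_case.argmax())        # row 0 -> "low"
print("maxmin choice:", ["low", "high"][maxmin_row],
      "guarantees", worst_case[maxmin_row])

# Best response to a belief that the competitor prices high with p = 0.7.
belief = np.array([0.3, 0.7])
expected = payoff @ belief                   # [7.5, 9.0]
print("best response:", ["low", "high"][int(expected.argmax())],
      "expected payoff", expected.max())
```

In this toy matrix maxmin picks the low price (guaranteeing a payoff of 4), while the best response to an optimistic belief about the competitor picks the high price, illustrating how the worst-case rule can forgo expected payoff.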


2011 ◽  
pp. 877-891
Author(s):  
Katrin Weller ◽  
Isabella Peters ◽  
Wolfgang G. Stock

This chapter discusses folksonomies as a novel way of indexing documents and locating information based on user generated keywords. Folksonomies are considered from the point of view of knowledge organization and representation in the context of user collaboration within the Web 2.0 environments. Folksonomies provide multiple benefits which make them a useful indexing method in various contexts; however, they also have a number of shortcomings that may hamper precise or exhaustive document retrieval. The position maintained is that folksonomies are a valuable addition to the traditional spectrum of knowledge organization methods since they facilitate user input, stimulate active language use and timeliness, create opportunities for processing large data sets, and allow new ways of social navigation within document collections. Applications of folksonomies as well as recommendations for effective information indexing and retrieval are discussed.
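
As a minimal mechanical illustration of folksonomy-based indexing, retrieval reduces to an inverted index from user-assigned tags to documents; the users, documents, and tags below are invented examples.

```python
from collections import defaultdict

tagging_events = [
    ("alice", "doc1", "web2.0"),
    ("bob",   "doc1", "folksonomy"),
    ("alice", "doc2", "folksonomy"),
    ("carol", "doc2", "tagging"),
]

index = defaultdict(set)        # tag -> set of documents carrying it
tag_counts = defaultdict(int)   # how often each tag was assigned

for user, doc, tag in tagging_events:
    index[tag].add(doc)
    tag_counts[tag] += 1

# Retrieval walks the inverted index from a user keyword to documents;
# assignment counts give a crude popularity/relevance weighting.
print(sorted(index["folksonomy"]))   # ['doc1', 'doc2']
print(dict(tag_counts))
```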


Author(s):  
Cesare Bartolini ◽  
Antonia Bertolino ◽  
Francesca Lonetti ◽  
Eda Marchetti

In this chapter, we provide an overview of recently proposed approaches and tools for functional and structural testing of SOA services. Typically, these two classes of approaches have been considered separately. However, since they focus on different perspectives, they are generally non-conflicting and could be used in a complementary way. Accordingly, we make an attempt at such a combination, briefly showing the approach and some preliminary results of the experimentation. The combined approach provides encouraging results from the point of view of the achievements and the degree of automation obtained. A very important concern in designing and developing web services is security. In the chapter we also discuss the security testing challenges and the currently proposed solutions.


Author(s):  
Diego Liberati

In many fields of research, as well as in everyday life, one often has to face a huge amount of data without an immediate grasp of an underlying simple structure, even though one often exists. A typical example is the growing field of bioinformatics, where new technologies, like the so-called micro-arrays, provide thousands of gene expression values for a single cell in a simple, fast and integrated way. On the other hand, the everyday consumer is involved in a process that is not so different from a logical point of view, when the data associated with his fidelity badge contribute to the large database of many customers, whose underlying consumption trends are of interest to the distribution market.

After collecting so many variables (say gene expressions, or goods) for so many records (say patients, or customers), possibly with the help of wrapping or warehousing approaches in order to mediate among different repositories, the problem arises of reconstructing a synthetic mathematical model capturing the most important relations between variables. To this purpose, two critical problems must be solved:

1. Select the most salient variables, in order to reduce the dimensionality of the problem, thus simplifying the understanding of the solution.
2. Extract the underlying rules, involving conjunctions and/or disjunctions of such variables, in order to get a first idea of their possibly nonlinear relations, as a first step towards designing a representative model whose variables will be the selected ones.

Once the candidate variables are selected, a mathematical model of the dynamics of the underlying generating framework still has to be produced. A first hypothesis of linearity may be investigated, but it is usually only a very rough approximation when the values of the variables are not close to the operating point around which the linear approximation is computed. On the other hand, building a nonlinear model is far from easy: the structure of the nonlinearity needs to be known a priori, which is usually not the case. A typical approach consists of exploiting a priori knowledge to define a tentative structure, then refining and modifying it on a training subset of the data, and finally retaining the structure that best fits a cross-validation on a testing subset of the data. The problem is even more complex when the collected data exhibit hybrid dynamics, i.e., their evolution in time is a sequence of smooth behaviors and abrupt changes.
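
A possible concrete instantiation of the two steps listed above follows; the chapter does not prescribe these particular algorithms, and scikit-learn with synthetic data is used here purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data stands in for gene expressions or purchase records.
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)

# Step 1: select the most salient variables to reduce dimensionality.
selector = SelectKBest(f_classif, k=5).fit(X, y)
X_sel = selector.transform(X)
print("selected columns:", selector.get_support(indices=True))

# Step 2: extract human-readable rules. Each root-to-leaf path of the
# tree is a conjunction of threshold tests; alternative paths leading
# to the same class act as disjunctions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_sel, y)
names = [f"x{i}" for i in selector.get_support(indices=True)]
print(export_text(tree, feature_names=names))
```

The printed tree paths match the rule format described above: conjunctions of conditions on the selected variables, combined disjunctively across branches.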


2019 ◽  
Vol 15 (S354) ◽  
pp. 46-52
Author(s):  
K. Nagaraju ◽  
K. Sankarasubramanian ◽  
K. E. Rangarajan

Measurement of the magnetic field in the chromosphere is challenging from the point of view both of the observations and of the interpretation of the data. In this work we present spectropolarimetric observations of a pore, taken simultaneously in Ca II at 854.2 nm (CaIR) and Hα (656.28 nm). The observed region includes a small-scale energetic event (SSEE) occurring between the pore and a region of opposite magnetic polarity at the photosphere. The energetic event appears to be a progressive reconnection event, as shown by the time evolution of the intensity profiles. Closer examination of the intensity profiles from the downflow regions suggests that the height of formation of CaIR is greater than that of Hα, contrary to the current understanding of their formation heights. Preliminary results on the inversion of the Stokes I and V profiles of CaIR are also presented.
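
The authors perform full inversions of Stokes I and V; a standard quick-look alternative, sketched here on synthetic profiles purely for illustration (it is not the authors' method), is the weak-field approximation, which relates Stokes V to the wavelength derivative of Stokes I.

```python
import numpy as np

# Weak-field approximation (WFA):
#   V(lambda) = -C * g_eff * lambda0^2 * B_los * dI/dlambda,
# with C = 4.6686e-13 for lambda in Angstrom and B in Gauss.
C = 4.6686e-13
LAMBDA0 = 8542.1    # Ca II IR line center [Angstrom]
GEFF = 1.10         # effective Lande factor of the 854.2 nm line

# Synthetic Stokes I and V profiles, invented for this sketch.
wav = np.linspace(8541.0, 8543.2, 120)
stokes_i = 1.0 - 0.6 * np.exp(-((wav - LAMBDA0) / 0.25) ** 2)
didl = np.gradient(stokes_i, wav)
b_true = 800.0                                   # Gauss, used to build V
stokes_v = -C * GEFF * LAMBDA0**2 * b_true * didl

# Least-squares WFA estimate of the line-of-sight field from (I, V).
alpha = -C * GEFF * LAMBDA0**2 * didl
b_los = np.sum(alpha * stokes_v) / np.sum(alpha**2)
print(f"recovered B_los = {b_los:.1f} G")        # ~800 G by construction
```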


2020 ◽  
Vol 6 (10) ◽  
pp. 110
Author(s):  
Francesco Lombardi ◽  
Simone Marinai

Nowadays, deep learning methods are employed in a broad range of research fields. The analysis and recognition of historical documents, as we survey in this work, is no exception. Our study analyzes the papers published on this topic in the last few years from different perspectives: we first provide a pragmatic definition of historical documents from the point of view of research in the area, and then we look at the various sub-tasks addressed in this research. Guided by these tasks, we go through the different input-output relations expected of the deep learning approaches used, and we accordingly describe the most common models. We also discuss the research datasets published in the field and their applications. This analysis shows that the latest research represents a leap forward, since it is not simply the application of recently proposed algorithms to old problems: novel tasks and novel applications of state-of-the-art methods are now being considered. Rather than just providing a conclusive picture of current research on the topic, we lastly suggest some potential future trends that may stimulate innovative research directions.

