Characteristics of the innovation activities of firms in Europe: a critical review of international differences

2017 ◽  
Vol 17 (3) ◽  
pp. 239-262 ◽  
Author(s):  
Marek Vokoun

Abstract A sample of 18 papers and 32 data sets revealed 210,404 firm-level observations about European firms making decisions about innovation. A total of 66,965 observations describe the activities of innovators between 1986 and 2008. This paper used a basic literature review to assess the properties of innovation among the quite rare full CDM (Crépon, Duguet, and Mairesse) papers. This study compared results from two systems of estimation and showed that both international and regional comparisons are rather problematic because of differing definitions of innovation variables and differences in data set representativeness. On average, a typical firm that engaged in innovation was a large firm competing in international markets in the sample of firms with 20+ employees. Smaller firms, however, invested more in research and development (R&D), and no linear relationship was found for output characteristics. Cooperation on R&D projects increased overall innovation intensity. There is strong evidence that public funding had an ambiguous effect on R&D spending and no additional effect on innovation output on average. This output, measured by sales from innovated goods and services, was on average positively related to labour productivity; a detailed view, however, suggested this effect was present only for product innovation. This paper shows that the results of innovation studies cannot be compared or used in research without deeper analysis of the data sample (micro companies, industries, active firms, entrants, etc.), the dependent variable (innovator, R&D expenditures, sales, productivity, new product, new service, etc.) and the baseline company defined by the independent variables.

2017 ◽  
Vol 30 (4) ◽  
pp. 447-464 ◽  
Author(s):  
Dong Xiang ◽  
Andrew C. Worthington

Purpose This paper aims to examine the impact of government financial assistance provided to Australian small and medium-sized enterprises (SMEs). Design/methodology/approach This study uses firm-level panel data on more than 2,000 SMEs over a five-year period from the Business Longitudinal Database compiled by the Australian Bureau of Statistics. The authors measure the impact of government financial assistance in terms of subsequent SME performance (income from sales of goods and services and profitability) and changes in the availability of alternative nongovernment finance. Findings The authors find that government financial assistance helps SMEs improve performance over and above the effects of conventional financing. They also find that the implicit guarantee signalled by a firm receiving government financial assistance suggests such firms are more likely to obtain nongovernment finance in the future. Control factors that significantly affect SME performance and finance availability include business size, the level of innovation, business objectives and industry. Research limitations/implications Nearly all of the responses in the original survey data are qualitative, so the authors are unable to assess how the strength of these relationships varies with the levels of assistance, income and profitability. The authors' measure of government financial assistance is also general in that it includes grants, subsidies and rebates from any Australian Government organisation, so they are unable to comment on the impact of individual federal, state or local government programmes. Practical implications Government financial assistance helps SMEs improve both immediate and future performance as measured by income and profitability. This could be because government financial assistance quickly overcomes the financial constraints endemic in SMEs. Government financial assistance also helps SMEs obtain nongovernment finance in the future. The authors conjecture that this is because it overcomes some of the information opaqueness of SMEs. Originality/value Few studies focus on the impact of direct government financial assistance compared with indirect assistance as is typical in credit guarantee schemes. The authors use a very large and detailed data set on Australian SMEs to undertake the analysis.


Nativa ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 70 ◽  
Author(s):  
Luís Flávio Pereira ◽  
Ricardo Morato Fiúza Guimarães

MAPPING LAND USES/COVERS WITH SEMI-AUTOMATIC CLASSIFICATION PLUGIN: WHICH DATA SET, CLASSIFIER AND SAMPLING DESIGN? ABSTRACT: This paper aimed to suggest guidelines for better mapping land uses with the Semi-automatic Classification Plugin (SCP) for QGIS, highlighting the best data sets, classifiers and training sampling designs. Four data sets derived from a Sentinel 2A image were combined with three classifiers available in the SCP and two sampling designs, with training samples (ROIs) kept separate or dissolved into a single sample, yielding 24 treatments. The treatments were evaluated for accuracy (Kappa coefficient), visual quality of the final map and processing time. The results suggest that: (1) the SCP is suitable for mapping land uses; (2) the larger the data set, the better the classifier performance; and (3) the use of dissolved ROIs always decreases processing time but has an ambiguous effect on the different classifiers. For best results, we recommend applying the Maximum Likelihood classifier to the largest data set available, using training samples collected to cover all intraclass variations and subsequently dissolved into a single ROI. Keywords: remote sensing, training samples, QGIS, Sentinel 2A.
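The accuracy criterion above is the Kappa coefficient, which corrects the overall agreement of a classification for the agreement expected by chance. A minimal sketch of computing Cohen's Kappa from a confusion matrix (the matrix values below are purely hypothetical, not from the paper):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's Kappa from a square confusion matrix (rows: reference, cols: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total  # overall (observed) agreement
    # chance agreement from the row/column marginals
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    return (observed - expected) / (1.0 - expected)

# Hypothetical 3-class land-use confusion matrix (e.g. forest, crop, urban)
cm = [[50, 2, 3],
      [4, 40, 6],
      [2, 5, 38]]
print(round(cohens_kappa(cm), 3))  # → 0.779
```

A Kappa of 1 indicates perfect agreement; values near 0 mean the map is no better than chance given the class proportions.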


2020 ◽  
Vol 74 ◽  
pp. 05028
Author(s):  
Marek Vokoun

The analysis examines the different innovation activities of foreign-owned enterprises in the Czech economy. Data come from the Czech Community Innovation Surveys of 2010, 2012, and 2014. This paper evaluates new-to-the-market innovation activities at the firm level. The analyzed sample consists of observations about innovators and about companies that did not engage in new-to-the-market innovation activities in the last three years. This paper explores the relationship between public support and the innovation activities of multinationals. The first results suggest that public support (local government funds, national government funds, EU funds, and EU Framework and Horizon funds) is not always statistically significant in terms of R&D expenditures in comparison to unsupported firms. The additional contribution of public support to innovation output is likewise not always statistically significant. Results suggest that local government funds (grant projects) are beneficial for foreign-owned new-to-the-market innovators: they contribute both to innovation input (R&D expenditures) and to innovation output (sales of innovated goods and services). Other public support variables indicate a crowding-out effect on private R&D&I investment. Globalization tendencies are supported by governments, and future research should aim at a more complex analysis of multinationals' behavior in this area.


2020 ◽  
Vol 25 (3) ◽  
pp. 465-481
Author(s):  
Poulomi Bhattacharya ◽  
Badri Narayan Rath

This article examines the impact of innovation on labour productivity by using the latest World Bank Enterprise Surveys data and compares the results between the Chinese and Indian manufacturing sectors. The article uses cross-section data based on two surveys conducted by the World Bank in 2012 and 2014 for China and India, respectively. By employing a simple ordinary least squares (OLS) regression technique, we find that innovation affects labour productivity positively for Chinese as well as Indian manufacturing firms, but its impact on firm productivity is relatively weak in the case of India as compared to China. Second, other factors such as the average wage of the workers, the education of production workers and training significantly boost the labour productivity of Chinese manufacturing firms as well as Indian firms. Third, our results based on firm size also indicate that the impact of innovation activities on labour productivity is higher for large firms than for medium firms. However, innovation does not affect the labour productivity of small manufacturing firms in either China or India. In terms of policy, it is important for both Chinese and Indian manufacturing firms to keep pursuing innovation activities in order to spur productivity, which would strengthen firms' growth.
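The OLS setup described above regresses firm-level labour productivity on an innovation indicator plus controls. A minimal sketch on synthetic data (the variable names and coefficients are illustrative assumptions, not the authors' specification or estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic firm-level data (illustrative only)
innovates = rng.integers(0, 2, n)        # innovation dummy
log_wage = rng.normal(2.0, 0.3, n)       # log average wage (control)
training = rng.integers(0, 2, n)         # training dummy (control)

# Assumed data-generating process: innovation raises log productivity by 0.25
log_productivity = (1.0 + 0.25 * innovates + 0.5 * log_wage
                    + 0.1 * training + rng.normal(0, 0.2, n))

# OLS via least squares: beta = argmin ||y - X b||^2
X = np.column_stack([np.ones(n), innovates, log_wage, training])
beta, *_ = np.linalg.lstsq(X, log_productivity, rcond=None)
print(beta[1])  # estimated innovation effect, close to the true 0.25
```

With a log outcome, the innovation coefficient is read as an approximate percentage effect on productivity, which is how cross-country magnitudes like China vs. India would be compared.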


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with a damped waveform became apparent varied between 17 and 26, or the cycle was not found in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
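The ACF signature described in point 2 (negative lag-1, positive lag-2) is what a 2-year population cycle produces. A minimal sketch of the sample ACF on an artificial alternating high/low series (the counts below are invented for illustration, not the trap data):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)]

# Illustrative yearly counts with a 2-year high/low cycle plus noise
rng = np.random.default_rng(1)
years = 30
counts = 100 + 40 * (-1) ** np.arange(years) + rng.normal(0, 10, years)

r1, r2 = acf(counts, 2)
print(r1, r2)  # lag-1 strongly negative, lag-2 positive
```

A damped waveform would show these alternating correlations shrinking toward zero at longer lags rather than staying at full strength.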


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models were created with all CoMFA descriptors; then, by applying each variable selection method, a new CoMFA model was developed, so nine CoMFA models were built for each data set. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying five of the variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, significantly increases the predictive power and stability of CoMFA models. Results & Conclusion: Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. Applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS also preserves CoMFA contour map information for both fields.
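The jackknife idea behind methods like SPA-jackknife is to refit the model with one observation left out at a time and judge each descriptor by how stable its coefficient is across refits. A generic leave-one-out sketch on synthetic data (this is a simplification for illustration, not the SPA-jackknife algorithm or CoMFA itself):

```python
import numpy as np

def jackknife_coef_stability(X, y):
    """Leave-one-out refits of a linear model: return the mean and std of
    each coefficient across jackknife replicates (a crude stability measure)."""
    n = X.shape[0]
    betas = []
    for i in range(n):
        keep = np.arange(n) != i
        b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        betas.append(b)
    betas = np.array(betas)
    return betas.mean(axis=0), betas.std(axis=0)

rng = np.random.default_rng(2)
n = 60
informative = rng.normal(size=n)         # descriptor that drives activity
noise_var = rng.normal(size=n)           # uninformative descriptor
y = 2.0 * informative + rng.normal(0, 0.5, n)

X = np.column_stack([informative, noise_var])
mean_b, std_b = jackknife_coef_stability(X, y)
# informative descriptor: large, stable coefficient; noise: near zero
print(mean_b, std_b)
```

Descriptors whose jackknife mean is near zero or whose spread swamps the mean are candidates for removal, which is the spirit of the selection methods compared in the abstract.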


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, a step motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous work. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding step and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of these parameters on performance.
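The data-generation schemes mentioned above segment a continuous sensor stream into fixed-size windows with a sliding step and assign each window a label. A minimal sketch, assuming a majority-vote labeling scheme (one common choice; the paper may use a different rule):

```python
import numpy as np

def sliding_windows(signal, labels, window, step):
    """Segment a 1D signal into overlapping windows; label each window
    by the majority label inside it."""
    segments, seg_labels = [], []
    for start in range(0, len(signal) - window + 1, step):
        segments.append(signal[start:start + window])
        # majority vote over the per-sample labels in this window
        values, counts = np.unique(labels[start:start + window],
                                   return_counts=True)
        seg_labels.append(values[np.argmax(counts)])
    return np.array(segments), np.array(seg_labels)

# Toy stream: 100 samples, activity 0 then activity 1
signal = np.arange(100, dtype=float)
labels = np.array([0] * 50 + [1] * 50)
segs, labs = sliding_windows(signal, labels, window=20, step=10)
print(segs.shape)  # (9, 20): 9 windows of length 20
```

Shrinking the step increases overlap and yields more training windows; enlarging the window trades temporal resolution for more context per example, which is exactly the trade-off the paper's parameter study probes.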


2020 ◽  
Vol 34 (2) ◽  
pp. 109-124
Author(s):  
Megan F. Hess ◽  
Andrew M. Hess

SYNOPSIS In this study, we investigate the relation between accounting failure and innovation at multiple levels in an organization by developing and testing a model for how top executives and functional managers might change their risk preferences and their innovation investments in response to public disclosures of financial misconduct. At the firm level, we find that accounting failures reduce subsequent investments in R&D, as predicted by a threat rigidity (“play it safe”) psychological response among top executives. At the project level, accounting failures have the opposite effect, resulting in an increase in the number of exploratory projects, as predicted by a failure trap (“swing for the fences”) psychological response among functional managers. Unpacking this relation at multiple levels of analysis helps us to understand the complex ways in which financial misconduct shapes a firm's innovation activities and appreciate the far-reaching consequences of accounting failure.


2019 ◽  
Vol 73 (8) ◽  
pp. 893-901
Author(s):  
Sinead J. Barton ◽  
Bryan M. Hennelly

Cosmic ray artifacts may be present in all photo-electric readout systems. In spectroscopy, they present as random unidirectional sharp spikes that distort spectra and may have an effect on post-processing, possibly affecting the results of multivariate statistical classification. A number of methods have previously been proposed to remove cosmic ray artifacts from spectra, but the goal of removing the artifacts while making no other change to the underlying spectrum is challenging. One of the most successful and commonly applied methods for the removal of cosmic ray artifacts involves the capture of two sequential spectra that are compared in order to identify spikes. The disadvantage of this approach is that at least two recordings are necessary, which may be problematic for dynamically changing spectra, and which can reduce the signal-to-noise (S/N) ratio when compared with a single recording of equivalent duration, due to the inclusion of two instances of read noise. In this paper, a cosmic ray artifact removal algorithm is proposed that works in a similar way to the double acquisition method but requires only a single capture, so long as a data set of similar spectra is available. The method employs normalized covariance in order to identify a similar spectrum in the data set, from which a direct comparison reveals the presence of cosmic ray artifacts, which are then replaced with the corresponding values from the matching spectrum. The advantage of the proposed method over the double acquisition method is investigated in terms of the S/N ratio, and the method is applied to various data sets of Raman spectra recorded from biological cells.
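The single-capture pipeline described above can be sketched in a few lines: find the library spectrum with the highest normalized covariance (Pearson correlation) with the target, flag samples where the target exceeds that match by a large margin, and substitute the matching values. The spike threshold and robust scale estimate below are illustrative choices, not the authors' exact parameters:

```python
import numpy as np

def remove_cosmic_rays(target, library, threshold=5.0):
    """Replace spike samples in `target` using the most similar spectrum
    in `library` (an iterable of spectra). Similarity is normalized
    covariance (Pearson correlation); `threshold` is in robust-sigma units."""
    target = np.asarray(target, dtype=float)
    corr = [np.corrcoef(target, s)[0, 1] for s in library]
    match = np.asarray(library[int(np.argmax(corr))], dtype=float)

    diff = target - match
    # robust scale of the residual (median absolute deviation)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    spikes = diff > threshold * sigma  # unidirectional: positive spikes only
    cleaned = target.copy()
    cleaned[spikes] = match[spikes]
    return cleaned

# Synthetic example: baseline spectrum, one spiked copy, small library
rng = np.random.default_rng(3)
base = np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.01, 200)
spiked = base + rng.normal(0, 0.01, 200)
spiked[120] += 5.0                            # injected cosmic ray artifact
library = [base + rng.normal(0, 0.01, 200) for _ in range(5)]

cleaned = remove_cosmic_rays(spiked, library)
```

Because only the flagged samples are replaced, the rest of the spectrum keeps its single-capture noise, which is the S/N advantage over double acquisition.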


2013 ◽  
Vol 756-759 ◽  
pp. 3652-3658
Author(s):  
You Li Lu ◽  
Jun Luo

In a study of kernel methods, this paper puts forward two improved algorithms, R-SVM and I-SVDD, to cope with imbalanced data sets in closed systems. R-SVM uses the K-means algorithm to cluster samples in feature space, while I-SVDD improves the performance of the original SVDD through training on the imbalanced samples. Experiments on two system call data sets show that the two algorithms are more effective and that R-SVM has lower complexity.
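One common way K-means clustering is used against class imbalance, as in the R-SVM idea above, is to replace the majority class with its cluster centroids before training the classifier. A minimal sketch of that rebalancing step (this illustrates the general trick, not the paper's exact algorithm):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means; returns the k centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    return centroids

# Imbalanced toy set: 300 majority vs 20 minority samples
rng = np.random.default_rng(4)
majority = rng.normal(0, 1, (300, 2))
minority = rng.normal(3, 1, (20, 2))

# Undersample the majority class to 20 K-means centroids
reduced_majority = kmeans(majority, k=20)
X_balanced = np.vstack([reduced_majority, minority])
print(X_balanced.shape)  # (40, 2)
```

The centroids summarize the majority-class geometry with far fewer points, so an SVM trained on the balanced set is no longer dominated by the majority class.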

