How participative is open source hardware? Insights from online repository mining

2018
Vol 4
Author(s):
Jérémy Bonvoisin
Tom Buchert
Maurice Preidel
Rainer G. Stark

Open Source Hardware (OSH) is an increasingly viable approach to intellectual property management that extends the principles of Open Source Software (OSS) to the domain of physical products. These principles support the development of products in transparent processes open to the participation of any interested person. While increasing numbers of products have been released as OSH, little is known about the prevalence of participative development practices in this emerging field. It remains unclear to what extent the transparent and participatory processes known from software have reached hardware product development. To fill this gap, this paper applies repository mining techniques to investigate the transparency and workload distribution of 105 OSH product development projects. The results highlight a heterogeneity of practices spanning a continuum between public and private development settings. They reveal different organizational patterns with different levels of centralization and distribution. Nonetheless, they clearly indicate the expansion of the open source development model from software into the realm of physical products and provide the first large-scale empirical evidence of this recent evolution. In doing so, this article gives body to an emerging phenomenon and helps give it a place in the scientific debate. It delivers categories to delineate practices, techniques to investigate them in further detail, and a large dataset of exemplary OSH projects. The discussion of these first results signposts avenues for a stream of research aimed at understanding the stakeholder interactions at work in new product innovation practices, so that institutions and industry can provide appropriate responses.
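As an illustrative sketch of the kind of repository mining used here (not the authors' actual pipeline), the following Python snippet counts commits per contributor in a local git clone and summarizes how concentrated the workload is; the repository path is a placeholder and the Gini coefficient is only one possible concentration measure, assumed here for illustration.

```python
import subprocess
from collections import Counter

def commit_shares(repo_path):
    """Return each author's share of commits in a local git clone."""
    emails = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    counts = Counter(emails)
    total = sum(counts.values())
    return {author: n / total for author, n in counts.items()}

def gini(values):
    """Gini coefficient of the commit distribution:
    0 = work evenly spread, values near 1 = work concentrated on one contributor."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

shares = commit_shares("path/to/osh-project")  # hypothetical local clone
print(sorted(shares.items(), key=lambda kv: -kv[1])[:5])  # top contributors
print("commit concentration (Gini):", round(gini(shares.values()), 2))
```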

2020
Vol 1
pp. 997-1006
Author(s):
R. Mies
J. Bonvoisin
R. Stark

Abstract
Open source hardware is hardware whose design is shared online so that anyone can study, modify, distribute, make, and sell it. In spite of the increasing popularity of this alternative IP management approach, the field of OSH remains fragmented into diverse practices that have yet to consolidate. This makes it challenging for providers of groupware solutions to capture the specific needs of open source product development practitioners. This contribution therefore delivers a list of basic requirements and verifies it against the functions offered by existing groupware solutions.


Author(s):  
Pavel Katunin
Jianbo Zhou
Ola M. Shehata
Andrew A. Peden
Ashley Cadby
...  

Modern data analysis methods, such as optimization algorithms or deep learning, have been successfully applied to a number of biotechnological and medical questions. For these methods to be efficient, a large number of high-quality and reproducible experiments needs to be conducted, which requires a high degree of automation. Here, we present an open-source, low-cost hardware framework that allows automatic high-throughput generation of large amounts of cell biology data. Our design consists of an epifluorescent microscope with an automated XY stage for moving a multiwell plate containing cells and a perfusion manifold allowing programmed application of up to eight different solutions. Our system is very flexible and can easily be adapted to individual experimental needs. To demonstrate the utility of the system, we have used it to perform high-throughput Ca2+ imaging and large-scale fluorescent labeling experiments.
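A minimal sketch of the acquisition loop such a system implies, under stated assumptions: the driver classes (XYStage, Camera, PerfusionManifold), their module, the serial port and the 96-well plate pitch are all placeholders, not the framework's actual interfaces.

```python
import time
# Hypothetical driver modules: the real framework exposes its own stage,
# camera and valve-manifold interfaces; these names are placeholders.
from hardware import XYStage, Camera, PerfusionManifold  # hypothetical

stage = XYStage(port="/dev/ttyUSB0")        # hypothetical serial port
camera = Camera(exposure_ms=100)
manifold = PerfusionManifold(n_channels=8)  # up to eight solutions

WELL_SPACING_MM = 9.0  # standard 96-well plate pitch (assumption)

def scan_plate(rows=8, cols=12, solution_channel=1, settle_s=2.0):
    """Visit every well, perfuse one of the eight solutions, then image."""
    manifold.select(solution_channel)
    for r in range(rows):
        for c in range(cols):
            stage.move_to(x=c * WELL_SPACING_MM, y=r * WELL_SPACING_MM)
            time.sleep(settle_s)  # let the stage and the flow settle
            frame = camera.snap()
            frame.save(f"well_{r}_{c}_ch{solution_channel}.tiff")

scan_plate()
```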


2020
Author(s):
Demetres Kostas
Frank Rudzicz

Abstract
We propose an open-source Python library, called DN3, designed to accelerate deep learning (DL) analysis with encephalographic data. This library focuses on making experimentation rapid and reproducible and facilitates the integration of both public and private datasets. Furthermore, DN3 is designed in the interest of validating DL processes that include, but are not limited to, classification and regression across many datasets, to prove capacity for generalization. We explore the effectiveness of this library by presenting a general scheme for person disambiguation called T-Vectors, inspired by speech recognition. These are single vectors created from typically short, though arbitrary-length, electroencephalographic (EEG) data sequences that uniquely identify users relative to others. T-Vectors were trained by classifying nearly 1000 people using sequences as short as 1 second and generalize effectively to users never seen during training. Generalized performance is demonstrated on two commonly used and publicly accessible motor imagery task datasets, which are notorious for intra- and inter-subject signal variability. On these datasets, subjects can be identified with accuracies as high as 97.7% by simply adopting the label of the nearest neighbouring T-Vector, with no dependence on the task performed and little dependence on recording session, even when sessions are separated by days. Visualization of the T-Vectors from both datasets shows no conflation of subjects between datasets and indicates a T-Vector manifold in which subjects cluster well. We conclude, first, that this is a desirable paradigm shift in EEG-based biometrics and, second, that this manifold deserves further investigation. Our proposed library provides a variety of essential tools that facilitated the development of T-Vectors. The T-Vectors codebase serves as a template for future projects using DN3, and we encourage leveraging our provided model for future work.

Author summary
We present a new Python library to train deep learning (DL) models with brain data. This library is tailored, but not limited, to developing neural networks for brain-computer interface (BCI) applications. There is abundant interest in leveraging DL in the wider neuroscience community, but we have found current solutions limiting. Furthermore, both BCI and DL benefit from benchmarking against multiple datasets and sharing parameters. Our library tries to be accessible to DL novices, yet not limiting to experts, while making experiment configurations more easily shareable and flexible for benchmarking. We demonstrated many of the features of our library by developing a deep neural network capable of disambiguating people from arbitrary lengths of electroencephalography data. We identify a variety of future avenues of study for the representations produced by our network, particularly in biometric applications and in addressing the variation in BCI classifier performance. We share our model, library and its associated guides and documentation with the community at large.
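The nearest-neighbour identification step described above can be sketched in a few lines of NumPy. This is not DN3's API; the cosine similarity metric and the 128-dimensional embeddings are assumptions made only for illustration.

```python
import numpy as np

def identify(query_vec, enrolled_vecs, enrolled_labels):
    """Adopt the label of the nearest enrolled T-Vector (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    e = enrolled_vecs / np.linalg.norm(enrolled_vecs, axis=1, keepdims=True)
    sims = e @ q  # cosine similarity to every enrolled vector
    return enrolled_labels[int(np.argmax(sims))]

# Toy usage with random stand-ins for network outputs:
rng = np.random.default_rng(0)
enrolled = rng.normal(size=(5, 128))  # one 128-d T-Vector per enrolled subject
labels = np.array(["s1", "s2", "s3", "s4", "s5"])
probe = enrolled[2] + 0.05 * rng.normal(size=128)  # noisy recording of subject s3
print(identify(probe, enrolled, labels))  # expected: "s3"
```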


Author(s):  
Zhuoxuan Li
Warren Seering

Abstract
Analyzing the value creation and capture mechanisms of open source hardware startup companies, this paper illustrates how an open source strategy can make economic sense for hardware startups. By interviewing 37 open source hardware company leaders and 12 company community members, as well as analyzing forum data from 3 open source hardware companies, we find that by open sourcing the design of hardware, a company can naturally establish its community, which is a key element of a company's success. Establishing a community can increase customer perceived value, decrease product development and sales costs, shorten product go-to-market time, and incubate startups with knowledge, experience and resources. These advantages can compensate for the risks associated with open source strategies and can make open source design a viable product development strategy for hardware startups.


Author(s):  
Georgi Derluguian

The author develops ideas about the origin of social inequality during the evolution of human societies and reflects on the possibilities of overcoming it. What makes human beings different from other primates is a high level of egalitarianism and altruism, which contributed to the more successful adaptability of human collectives at early stages of the development of society. The transition to agriculture, coupled with substantially increasing population density, was marked by the emergence and institutionalisation of social inequality based on the inequality of tangible assets and symbolic wealth. Then, new institutions of warfare came into existence, aimed at conquering and enslaving the neighbours engaged in productive labour. While exercising control over nature, people also established and strengthened their power over other people. Chiefdom as a new type of polity came into being. Elementary forms of power (political, economic and ideological) served as a basis for the formation of early states. The societies in those states were characterised by social inequality and cruelties, including slavery, mass violence and numerous victims. Nowadays, the old elementary forms of power that are inherent in personalistic chiefdom still function alongside modern institutions of public and private bureaucracy. This constitutes the key contradiction of our time: the juxtaposition of individual despotic power and public infrastructural power. However, society is evolving towards an ever more efficient combination of social initiatives with the sustainability and viability of large-scale organisations.


Author(s):  
Passakorn PHANNACHITTA
Akinori IHARA
Pijak JIRAPIWONG
Masao OHIRA
Ken-ichi MATSUMOTO

2017
Vol 2 (1)
pp. 80-87
Author(s):
Puyda V.
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems, which are used in multiple areas. Object detection can be performed both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. Besides that, operations such as finding the coordinates, size and other characteristics of these non-uniformities can be executed and used to solve other computer vision problems, such as object identification. In this paper, we study three algorithms which can be used to detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference, and feature detection. As the input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulation and testing of the algorithms were done on a universal computer based on open-source hardware, built on the Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC running at 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10 and on a universal computer running Linux (Raspbian Buster OS) on the open-source hardware. In the paper, the methods under consideration are compared. The results of the paper can be used in the research and development of modern computer vision systems for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
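As an illustration of the frame-difference approach studied in the paper, here is a generic OpenCV-Python sketch (not the authors' Visual Studio implementation); the input file name, blur kernel, and thresholds are placeholders chosen for the example.

```python
import cv2

# Works with a camera index (e.g. 0) or an .mp4 path, as in the paper.
cap = cv2.VideoCapture("input.mp4")  # hypothetical file name

ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read input")
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)                # inter-frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:              # ignore tiny non-uniformities
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    prev = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```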


2018
Author(s):
Matthias May
Kira Rehfeld

Greenhouse gas emissions must be cut to limit global warming to 1.5-2 °C above preindustrial levels. Yet the rate of decarbonisation is currently too low to achieve this. Policy-relevant scenarios therefore rely on the permanent removal of CO₂ from the atmosphere. However, none of the envisaged technologies has demonstrated scalability to the decarbonisation targets for the year 2050. In this analysis, we show that artificial photosynthesis for CO₂ reduction may deliver an efficient large-scale carbon sink. This technology is mainly developed towards solar fuels, and its potential for negative emissions has been largely overlooked. With high efficiency and low sensitivity to high temperature and illumination conditions, it could, if developed into a mature technology, present a viable approach to fill the gap in the negative emissions budget.

