A Structural Model for the Design and Implementation of Open Innovation

2011 ◽  
pp. 68-81
Author(s):  
Matthew C. Heim

With the recent development of open innovation as a formal management discipline, many organizations are struggling to build the internal competencies needed to generate measurable success. Many companies use a trial-and-error approach that too often leads to unnecessary cost overruns and even outright failure. The model presented in this chapter provides readers with a simple yet elegant structure for the design and implementation of a successful open innovation program. The chapter explores the leading causes of failure in new open innovation programs and offers guidelines and criteria that open innovation leaders and practitioners can use to avoid these pitfalls and to establish a program that generates tangible returns while motivating participants toward more desirable innovative behaviors.

Author(s):  
Don D. Winfree

Windage losses in gearboxes account for a large portion of the total power loss in high-speed drive trains, yet very little data has been collected specifically quantifying these losses. Traditionally, the effects of baffles in high-speed gearing applications have been measured by trial and error on very complex systems. This trial-and-error technique is used throughout the gearing industry to solve problems without isolating each individual gear windage effect; the resulting solutions are usually sub-optimal and cause time-consuming delays and cost overruns in many programs. This paper describes a gear baffle test rig that was built to quantify and minimize gear windage losses in high-speed drive trains. The tests were conducted at the Lockheed Martin Aeronautics Company facility in Fort Worth, Texas. The intent of the gearbox baffle test rig was to isolate and measure the windage effects on a single high-speed bevel gear with various baffle configurations. The results of these tests were used to define a basic set of ground rules for designing baffles, and this set of ground rules was then used to design an optimum baffle configuration.


2009 ◽  
Vol 5 (H15) ◽  
pp. 217-217 ◽  
Author(s):  
Aleksander Brzeziński ◽  
Nicole Capitaine

The axial component of Earth rotation, which is conventionally expressed by Universal Time (UT1), contains small physical signals with diurnal and subdiurnal periods. This part of the spectrum is dominated by tidal effects, which are regular and predictable. The largest components express the influence of the gravitationally forced ocean tides, with diurnal and semidiurnal periods and amplitudes up to 0.02 milliseconds (ms) in UT1, corresponding to an angular displacement of 0.30 milliarcseconds (mas); see Table 8.3 of the IERS Conventions (IERS, 2003). There are also smaller subdiurnal components (amplitudes up to 0.03 mas), designated as "spin libration" by Chao et al. (1991), due to the direct influence of tidal gravitation on those features of the Earth's density distribution expressed by the non-zonal terms of the geopotential. These components are not included in the models recommended by the IERS Conventions, in contrast to the corresponding effect in polar motion (ibid., Table 5.1).

Here we consider in detail the subdiurnal libration in UT1. We derive an analytical solution for a structural model of the Earth consisting of an elastic mantle and a liquid core that are not coupled to each other. The reference solution for the rigid Earth is computed using the satellite-determined coefficients of the geopotential and recent developments of the tide-generating potential (TGP). We arrive at the conclusion that the set of terms with amplitudes exceeding the truncation level of 0.005 mas consists of 11 semidiurnal harmonics due to the influence of the TGP term u22 on the equatorial flattening of the Earth expressed by the Stokes coefficients C22, S22. There is excellent agreement between our estimates for the rigid Earth and the amplitudes derived by Wünsch (1991); the only important difference is the term with the tidal code ν2, which seems to have been overlooked in Wünsch's development.

Our amplitudes computed for an elastic Earth with a liquid core appear to be in reasonable agreement with those derived by Chao et al. (1991), although the latter model was not complete. The estimated effect is superimposed on ocean tide influences having the same frequencies but 9 to 11 times larger amplitudes. Nevertheless, its maximum peak-to-peak size is about 0.105 mas, hence definitely above the current uncertainty of UT1 determinations. Comparison with the corresponding model of prograde diurnal polar motion associated with the Earth's triaxiality (IERS Conventions, Table 5.1) shows that: 1) the two effects are of similar size; and 2) there is consistency between the underlying dynamical models, parameters employed, etc. In conclusion, we recommend adding the model developed here to the set of procedures provided by the IERS Conventions.
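In generic harmonic form (assumed here for illustration; the paper itself tabulates the actual tidal arguments and amplitudes), a libration series of 11 semidiurnal terms above the stated truncation level can be sketched as:

```latex
\Delta\mathrm{UT1}(t) \;=\; \sum_{i=1}^{11} A_i \,\sin\!\bigl(\theta_i(t) + \varphi_i\bigr),
\qquad A_i \ge 0.005\ \mathrm{mas},
```

where each tidal argument \(\theta_i(t)\) is a linear combination of the fundamental astronomical arguments and \(\varphi_i\) is the corresponding phase.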


Author(s):  
Linlin Zhao ◽  
Huirong Zhang

Project complexity is usually considered one of the main causes of cost overruns, resulting in poor performance and thus project failure. However, empirical studies evaluating its effects on project cost remain lacking. Given this circumstance, this study develops the relationships between project cost and multidimensional project complexity elements. We establish complexity as a multidimensional factor comprising task, organization, market, legal, and environment complexities. This study uses an empirical, evidence-based structural model to account for the relationships between project cost and project complexity, thereby developing a quantitative assessment of multidimensional project complexity. The findings suggest that task and organization complexities have direct effects on project cost, while market, legal, and external environment complexities have indirect effects. The practical contribution is that the findings improve the understanding of which dimensions of complexity significantly influence project cost, and of the need to focus efforts on strategically addressing those complexities.
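The distinction between direct and indirect effects in such a structural model can be sketched in a few lines. This is a minimal illustration only: the variable names are taken from the abstract, but all path coefficients below are hypothetical, whereas the paper estimates them empirically.

```python
# Illustrative path-model sketch of direct vs. indirect complexity effects
# on project cost. All coefficients are hypothetical placeholders; the
# structure (which paths exist) follows the abstract's findings.

def project_cost_effects(task, organization, market, legal, environment):
    # Indirect paths: market, legal, and environmental complexity are
    # assumed to act on cost only through task/organization complexity.
    task_total = task + 0.4 * market + 0.2 * environment
    org_total = organization + 0.3 * legal
    # Direct paths: only task and organization complexity load on cost.
    return 0.6 * task_total + 0.5 * org_total

# A unit increase in market complexity raises cost only via the task path:
baseline = project_cost_effects(1, 1, 0, 0, 0)
with_market = project_cost_effects(1, 1, 1, 0, 0)
indirect_market_effect = with_market - baseline  # 0.6 * 0.4 = 0.24
```

The design point is that a dimension with no direct path can still move cost, which is why the multidimensional decomposition matters in practice.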


Author(s):  
Dennis J. L. Siedlak ◽  
Paul R. Schlais ◽  
Olivia J. Pinon ◽  
Dimitri N. Mavris

In the years since the Cold War, the aerospace industry has shifted from the primarily performance-based designs of the past era toward affordability-based design. While many techniques, such as Integrated Product and Process Development (IPPD) and Product Lifecycle Management (PLM), have been implemented in support of this shift, recent developments in the industry have led to major cost overruns and production delays. The increased prevalence of demand variability in the aerospace industry and the difficulty of rapidly adapting production plans are a primary cause of these issues. Furthermore, traditional aircraft designers perform detailed manufacturing cost analysis late in the design process, when the majority of program costs are already committed. With the recent shift to more composite aerostructures, the historical regressions and cost-estimating relationships used to predict cost and manufacturability are no longer accurate, so postponing detailed cost analyses to later design phases can lead to high costs due to sub-optimal early design decisions. The methodology presented in this paper addresses these problems by providing the ability to conduct multi-disciplinary trades in the early stages of design, when a large amount of design freedom and many cost-savings opportunities exist. To enable these trades, this paper describes how aircraft performance considerations are integrated with production rate, manufacturing cost, and financial planning metrics into a parametric, visual trade-off environment. The environment, combined with a multi-objective optimization routine, facilitates effective affordability-based trades during the early stages of design. A redesigned F-86 Sabre wingbox, evaluated with three separate manufacturing concepts, is used as a proof of concept for this research.
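At the core of any such multi-objective trade environment is non-dominated (Pareto) filtering of candidate designs. The sketch below is illustrative only: the design names and objective values are hypothetical, not the paper's data, and the two objectives stand in for the performance/cost/rate metrics the abstract describes.

```python
# Minimal Pareto-front filter over hypothetical wingbox design concepts,
# each scored on two competing objectives to be minimized:
# (relative manufacturing cost, relative structural weight).
designs = {
    "bonded_composite":  (1.00, 0.80),
    "bolted_composite":  (0.85, 0.95),
    "metallic_baseline": (0.70, 1.10),
    "hybrid_layup":      (1.10, 0.90),  # dominated by bonded_composite
}

def pareto_front(points):
    """Keep only points no other point beats on both objectives."""
    front = {}
    for name, (c, w) in points.items():
        dominated = any(
            oc <= c and ow <= w and (oc, ow) != (c, w)
            for other, (oc, ow) in points.items()
            if other != name
        )
        if not dominated:
            front[name] = (c, w)
    return front

front = pareto_front(designs)
```

Presenting only the non-dominated set is what lets designers trade affordability against performance early, before the bulk of program cost is committed.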


Author(s):  
Danielle Logue

This chapter considers the historical changes in the way corporations engage in innovation, conceptualizations of disruptive innovation, and the consequences of recent developments in technology, models, and movements for the corporate form (particularly its boundaries), practices, and leadership. It discusses how the notion of disruptive innovation has developed, and summarizes the main innovation dichotomies that have emerged from years of academic research on how corporations innovate. It then focuses on the implications of open innovation and business model innovation for the corporation, and details current corporate responses to disruptive innovation. The chapter concludes with a consideration of how disruptive innovations are affecting the role and significance of the corporation in modern society.


2009 ◽  
Vol 16-19 ◽  
pp. 358-362
Author(s):  
Xiang Tong Yan ◽  
Ping Yu Jiang

Owing to the complexity and miniaturization of MEMS, the MEMS design process differs greatly from the general mechanical design process, and the design flow is commonly dominated by trial-and-error methods. Describing the MEMS design process so that it can be reused helps to reduce the number of redesigns and thus decreases time to market. Based on XML, with the necessary extensions, a MEMS design process description language called MDPDL is presented, and its syntax and implementation are discussed in detail. A Document Type Definition (DTD) is adopted to define the syntax rules for the tags of MDPDL, considering the core activities and related resources of the MEMS design process. Finally, an implementation case is given to illustrate the efficiency of MDPDL.
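To make the idea of an XML-based process description concrete, the sketch below shows what an MDPDL-style document might look like and how it could be parsed for reuse. The tag and attribute names here are hypothetical stand-ins; the actual MDPDL vocabulary is fixed by the paper's DTD.

```python
# Hypothetical MDPDL-style fragment (tag names invented for illustration)
# parsed with the standard library, to show how activities and resources
# of a recorded design process could be recovered for reuse.
import xml.etree.ElementTree as ET

mdpdl_doc = """
<designProcess name="micro-accelerometer">
  <activity id="a1" type="synthesis">
    <resource tool="layout-editor"/>
  </activity>
  <activity id="a2" type="simulation">
    <resource tool="fem-solver"/>
  </activity>
</designProcess>
"""

root = ET.fromstring(mdpdl_doc)
activities = [a.get("id") for a in root.findall("activity")]
tools = [r.get("tool") for r in root.iter("resource")]
```

In the paper's scheme, a DTD would constrain which elements and attributes are legal, so that every recorded process instance can be validated before being reused.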


Author(s):  
Don D. Winfree

Windage losses in gearboxes account for a large portion of the total power loss in high-speed drive trains, yet very little data has been collected specifically quantifying these losses. Traditionally, the effects of baffles in high-speed gearing applications have been measured by trial and error on very complex systems. This trial-and-error technique is used throughout the gearing industry to solve problems without isolating each individual gear windage effect; the resulting solutions are usually sub-optimal and cause time-consuming delays and cost overruns in many programs. This paper describes two gear baffle test rigs that were built to quantify and minimize gear windage losses in high-speed drive trains. The intent of the first gearbox baffle test rig was to isolate and measure the windage effects on a single high-speed bevel gear with various baffle configurations. The results of these tests were used to define a basic set of ground rules for designing baffles. This set of ground rules was then applied to a second rig replicating the F-35 LiftFan gearbox configuration, with immediate benefits. Without this work, Lockheed Martin's X-35 STOVL aircraft would not have been able to operate.


1939 ◽  
Vol 141 (1) ◽  
pp. 535-547 ◽  
Author(s):  
S. J. Davies

The developments since the author's 1932 paper to the Institution are reviewed under the subdivisions: combustion chamber design; injection systems; fuels. While few novel features can be reported under combustion chamber design, the growth of knowledge concerning injection systems and the properties of fuels has been considerable. The processes in fuel-injection systems are now well understood, and the conditions governing the break-up of fuel sprays are also reasonably clear. The fundamentals of combustion, and what determines ignition lag, still remain obscure; the interaction between spray break-up and the course of combustion is also indefinite, and trial-and-error methods must still be employed. On the other hand, the cetane scale for assessing the suitability of fuels has been introduced. Reference is made to electrical indicators and to the determination of impact loading on running gear during combustion. Two-stroke working and supercharging are discussed.


2019 ◽  
Vol 92 (1101) ◽  
pp. 20190093 ◽  
Author(s):  
Leo P. Sugrue ◽  
Rahul S. Desikan

What is the future of neuroradiology in the era of precision medicine? As with any big change, this transformation in medicine presents both challenges and opportunities, and to flourish in this new environment we will have to adapt. It is difficult to predict exactly how neuroradiology will evolve in this shifting landscape, but there will be changes in both what we image and what we do. In terms of imaging, we will need to move beyond simply imaging brain anatomy and toward imaging function, at both the molecular and circuit levels. In terms of what we do, we will need to move from the periphery of the clinical enterprise toward its center, with a new emphasis on integrating imaging with genetic and clinical data to form a comprehensive picture of the patient that can be used to direct further testing and care. The payoff is that these changes will align neuroradiology with the emerging field of precision psychiatry, which promises to replace symptom-based diagnosis and trial-and-error treatment of psychiatric disorders with diagnoses based on quantifiable genetic, imaging, physiologic, and behavioural criteria, and with therapies targeted to the particular pathophysiology of individual patients. Here we review some of the recent developments in behavioural genetics and neuroscience that are laying the foundation for precision psychiatry. This review is by no means comprehensive; our goal is to introduce some of the perspectives and techniques that are likely to be relevant to the precision neuroradiologist of the future.


2012 ◽  
Vol 2012 ◽  
pp. 1-8
Author(s):  
Ismail Yusuf ◽  
Yusram Yusuf ◽  
Nur Iksan

This paper investigates the use of genetic algorithms (GA) in the design and implementation of fuzzy logic controllers (FLC) for egg incubation. The first question tackled is how best to determine the membership functions. Selecting accurate membership functions is important, but conventional FLC design has a common weakness: the membership functions are generated by human operators. This selection process proceeds by trial and error, step by step, and takes too long to solve the problem. This paper develops a system that helps users determine the membership functions of an FLC using GA optimization, for faster problem solving. The data collection is based on simulation results, and the results refer to a transient response specification, namely maximum overshoot. The results presented show an improvement: the overshoot decreases from 1.2800 for the FLC without GA to 1.0081 with GA (FGA).
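The GA tuning loop the abstract describes can be sketched in miniature. Everything below is a toy stand-in: the "plant" is a one-parameter overshoot model rather than a real fuzzy controller and incubator, and all GA settings (population size, mutation scale, selection scheme) are illustrative assumptions, not the paper's.

```python
# Toy GA tuning one membership-function width parameter to minimize a
# hypothetical overshoot model (smallest near width = 0.5). In the paper,
# the fitness would instead come from simulating the FLC-controlled plant.
import random

def overshoot(width):
    # Hypothetical stand-in for the simulated maximum overshoot.
    return 1.0 + (width - 0.5) ** 2

def genetic_search(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=overshoot)            # rank by fitness (low overshoot)
        parents = pop[: pop_size // 2]     # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0          # arithmetic crossover
            child += rng.gauss(0.0, 0.05)  # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return min(pop, key=overshoot)

best = genetic_search()
```

Replacing the step-by-step manual trial and error with a search loop like this is what the paper credits for the reduction in overshoot.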

