LiDAR Packaging

Author(s):  
Tim Nguyen

The rising demand for the "eyes of the machine", 3D point-cloud imaging, across a wide set of application areas, such as automotive, trucking, UAVs/drones, industrial, mapping, military and defense, and surveillance, is expected to drive exponential growth in the LiDAR market over the next two decades. The massive automotive LiDAR market is at a nascent stage: Wall Street research projects a $2 billion market by 2020, growing to a staggering $82 billion by 2035. The total available market (TAM) for LiDAR sensors is expected to grow from hundreds of thousands of units to millions and then hundreds of millions over this period. With this growth in demand and in application-specific requirements, design, manufacturing, and proof of performance must be revolutionized to deliver performance, quality, reliability, scale, and cost. LiDAR optical design, integration, and miniaturization require novel packaging solutions, process and advanced tool development, and hands-on lessons. Packaging and performance validation require a multidisciplinary approach spanning the optical, electrical, mechanical, and computer science domains. Although LiDAR has been around since the 1960s, the industry is only beginning to learn the challenges in semiconductor packaging, interconnect substrate technologies, optoelectronic module technology, inter-assembly, and multichip-module manufacturing needed to meet the demands of the autonomous machine market.

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Timothy Alan Ringo

Purpose: This paper outlines an emerging trend that is replacing traditional retirement, called "protirement." Protirement is defined as early retirement from professional work with the positive aim of pursuing something more fulfilling; the term is a blend of "pro-" and "retirement."

Design/methodology/approach: The paper defines the trend of protirement and then backs the idea with data and cases showing where and how it is being implemented by human resources (HR) organizations.

Findings: Retirement, in its traditional sense, is becoming increasingly unattainable for individuals, but it is also less necessary than in the past. People are living longer and healthier lives, and data show that continuing to work in some meaningful and valuable way actually increases life-span and allows more time to save for the day when one can no longer work.

Research limitations/implications: The findings in this paper should spark further research into the aging workforce and into new models that leverage senior workers for the benefit of individuals, organizations and society at large.

Practical implications: HR executives and their organizations will need to drive change in recruitment (of senior workers), in pension planning and saving, and in HR policies around retirement.

Social implications: Workforce productivity has been in decline for over 10 years. All hands on deck will be needed to fix this and to overcome the economic challenges created by the 2020 pandemic. Leveraging senior workers brings deep expertise into the workplace that would otherwise be lost, improving productivity and organizational learning.

Originality/value: This paper takes an idea coined in the 1960s and brings it into the 21st century, when and where it is really needed; this long-forgotten idea is being resurrected to help address today's workplace challenges.


2012 ◽  
Author(s):  
Francis Berghmans ◽  
Thomas Geernaert ◽  
Marek Napierała ◽  
Tigran Baghdasaryan ◽  
Camille Sonnenfeld ◽  
...  

2021 ◽  
Vol 40 (2) ◽  
pp. 1-19
Author(s):  
Ethan Tseng ◽  
Ali Mosleh ◽  
Fahim Mannan ◽  
Karl St-Arnaud ◽  
Avinash Sharma ◽  
...  

Most modern commodity imaging systems we use directly for photography—or indirectly rely on for downstream applications—employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today's compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss' linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting—together with hardware ISP post-processing and detection—outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively, across all domain-specific applications tested.
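The alternating scheme can be sketched with a deliberately tiny stand-in model: a single mixing coefficient plays the role of the optics parameters and a scalar gain plays the role of the ISP/neural stages, updated in turn against one end-to-end loss. Everything here (the blur kernel, the parameters, plain gradient steps in place of proximal updates) is an illustrative assumption, not the paper's actual forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                                        # ground-truth scene
blurred = np.convolve(x, [0.25, 0.5, 0.25], mode="same")  # fixed optical blur

def render(a, gain):
    # "a" mixes in optical blur (the stand-in optics parameter);
    # "gain" stands in for the tunable processing pipeline.
    return gain * ((1 - a) * x + a * blurred)

def loss(a, gain):
    return np.mean((render(a, gain) - x) ** 2)            # end-to-end loss

def num_grad(f, p, eps=1e-5):
    # central-difference gradient; the paper differentiates analytically
    return (f(p + eps) - f(p - eps)) / (2 * eps)

a, gain = 0.9, 0.5
loss0 = loss(a, gain)
for _ in range(300):
    # Alternate: one step on the optics, then one on the processing stage
    a -= 0.2 * num_grad(lambda v: loss(v, gain), a)
    gain -= 0.2 * num_grad(lambda v: loss(a, v), gain)

print(round(loss(a, gain), 6))
```

The point of the sketch is the structure, not the model: each stage is updated while the other is held fixed, with both stages scored by the same task loss.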


Author(s):  
Biswanath Samanta

This paper reports the development of an introductory mechatronics course in the Mechanical Engineering (ME) undergraduate program at Georgia Southern University. It is an updated version of an existing required course in the ABET-accredited BSME program. The course covers three broad areas: mechatronic instrumentation; computer-based data acquisition and analysis; and microcontroller programming and interfacing. This required 3-credit course in the ME program updates the existing course with application-specific computing content, reinforcing the theoretical foundation with hands-on learning activities. The course has four contact hours per week: two hours of lecture and two hours of interactive problem-solving and laboratory experiments. For each topic covered, students get the theoretical background and hands-on experience in a laboratory setting. Both formative and summative assessments of the students' performance are planned, in both direct and indirect forms. The paper reports the details of the course materials.


2019 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Manuel Hensmans

Purpose: An essential corporate decision-making tool, the Boston Consulting Group's growth-share matrix, is due for an upgrade. The purpose of this paper is to upgrade the matrix for use by corporate managers in the current platform age. Designed in the conglomerate age of the 1960s and 1970s to help corporate managers make disciplined, systematic portfolio investment decisions, the matrix is ill-adapted to the platform age in which we now live. The most valuable companies in the world are now platform companies, and many companies are transitioning to a more platform-based corporate portfolio. In this paper, the author explains how corporate managers can build and execute a sustainable platform portfolio.

Design/methodology/approach: The author started with a thorough study of the contextual assumptions and theoretical background of the original Boston Consulting Group growth-share matrix, which he has taught for the past decade, and contrasted these with the assumptions and theoretical background developed in the platform strategy literature. To test and refine the framework, the author presented and discussed its applicability at companies such as GSK and with local consultants. He then used five consecutive cohorts of master's students (280 students in 70 groups) to test the framework on a total of 20 companies, both "born platform" and "product to platform" companies.

Findings: The platform ecosystem age requires a corporate decision-making matrix that discriminates between businesses on the basis of platform market growth and platform commercialization capability, rather than product market growth and market share. As in the original matrix, these businesses correspond to three investment horizons (Figure 1): the continuous renewal of blockbuster businesses, the integration of emerging killer businesses, and experimentation with joint innovation businesses. The paper helps corporate managers build and execute a sustainable platform portfolio by means of a sequence of six decision-making steps and a clear organizational template for successful execution.

Originality/value: The portfolio matrix, decision-making sequence and organizational execution advice presented in this paper fit both "born platform" companies such as Google (Alphabet) and "product to platform" hybrids such as Lego. The paper illustrates this with practical examples for both types of companies.


Author(s):  
Timothy C. Scott ◽  
Robert J. Ribando

During the 1960s, major revisions took place in undergraduate thermo/fluids (thermodynamics, fluid mechanics and heat transfer) textbooks and in the pedagogy used to teach these disciplines. In the decades since, both students and instructors have changed. Many students arrive with less-than-adequate mathematics and study skills, rely almost exclusively on the Internet for reference materials, and have very little "hands-on" knowledge of how things work. The number of instructors with practical expertise or industrial experience has decreased markedly as well. Yet the methods by which material is presented, and the tools and resources students are exposed to, have not changed sufficiently. Meanwhile, the tools available in industry have improved significantly, and the knowledge graduates need to use these tools has not kept pace. This paper looks at how thermo/fluids education has evolved over the past five decades and points out areas that are not receiving sufficient attention: the use of computers as teaching aids, the training of students in the software packages prevalent in modern industry, and the need to update the database of design information. Students' almost exclusive use of the Internet and other non-refereed sources of information is also a significant problem that needs addressing.


2007 ◽  
Vol 32 (2) ◽  
pp. 36-39 ◽  
Author(s):  
Doro Boehme

This article describes the Joan Flasch Artists’ Book Collection, a unique archive of experimental art forms from the 1960s to the present, and how students and the general public make use of its welcoming access policies. It further addresses conservation efforts as they apply to a setting that strongly emphasizes hands-on usage of materials.


2020 ◽  
Vol 1 (1) ◽  
pp. 6
Author(s):  
Rachel Kemp ◽  
Alexander Chippendale ◽  
Monica Harrelson ◽  
Jennifer Shumway ◽  
Amanda Tan ◽  
...  

Optics is an important subfield of physics required for instrument design and used in a variety of other disciplines, especially subjects that intersect the life sciences. Students from a variety of disciplines and backgrounds should be educated in the basics of optics to train the next generation of interdisciplinary researchers and instrumentalists who will push the boundaries of discovery or create inexpensive optical-based diagnostics. We present an experimental curriculum developed to teach students the basics of optics, including ray (geometric) and wave optics. The students learn these concepts in an active, hands-on manner through designing, building, and testing a homebuilt light microscope made from component parts. We describe the experimental equipment and basic measurements students can perform to learn these optical principles, focusing on good optical design techniques, testing, troubleshooting, and iterative design. Students are also exposed to fundamental concepts of measurement uncertainty inherent in all experimental systems. The microscope students build is open and versatile enough to allow advanced projects, such as epifluorescence. We also describe how the equipment and curriculum can be flexibly used for an undergraduate-level optics course, an advanced laboratory course, and graduate-level training modules or short courses.
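The kind of ray-optics relation students verify on such a microscope bench can be checked numerically with the thin-lens equation, 1/f = 1/d_o + 1/d_i. The focal length and object distance below are arbitrary example values, not from the paper.

```python
def image_distance(f_mm, d_obj_mm):
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for d_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_obj_mm)

def magnification(d_obj_mm, d_img_mm):
    """Transverse magnification m = -d_i / d_o (negative means inverted)."""
    return -d_img_mm / d_obj_mm

# Object 75 mm in front of a 50 mm lens:
d_i = image_distance(50.0, 75.0)   # image forms 150 mm behind the lens
m = magnification(75.0, d_i)       # m = -2: inverted, magnified 2x
print(d_i, m)
```

Students can compare the computed image distance and magnification against measurements on the bench, which also makes a natural entry point for discussing measurement uncertainty.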


2013 ◽  
Vol 333-335 ◽  
pp. 742-748
Author(s):  
Yu Long Wan ◽  
Zhi Gang Wu ◽  
Ruo Hua Zhou ◽  
Yong Hong Yan

Over the last decade, many sophisticated and application-specific methods have been proposed for transcription of polyphonic music, yet performance seems to have reached a plateau. This paper describes a high-performance piano transcription system with two main contributions. First, a new onset detection method is proposed using a matched filter on the energy envelope, which proves very well suited to piano music. Second, a computer-vision method is proposed to enhance audio-only piano transcription by recognizing the player's hands on the piano keyboard. We carried out comparative experiments on onset detection and on the overall system using the MAPS database and our video database. The results were compared with the best piano transcription system from MIREX 2008, which still held the best performance on the piano subset through MIREX 2012. The results show that our system substantially outperforms this state-of-the-art method.
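The general idea behind matched-filter onset detection can be sketched on a synthetic signal: correlate an energy envelope with a template shaped like the expected attack transient, then pick peaks. The frame rate, decay constants, and template shape below are assumptions for illustration, not the paper's filter tuned to real piano recordings.

```python
import numpy as np

sr = 100                      # envelope frames per second (assumed)
t = np.arange(300) / sr
env = np.zeros_like(t)
for onset in (50, 180):       # two synthetic note attacks (frame indices)
    env[onset:] += np.exp(-(t[onset:] - t[onset]) * 8.0)  # sharp rise, decay

# Matched filter: template shaped like the expected attack transient
template = np.exp(-np.arange(20) / 5.0)
template -= template.mean()   # zero-mean so steady regions score near zero
score = np.correlate(env, template, mode="valid")

# Peak picking: local maxima above a threshold
thr = 0.5 * score.max()
peaks = [i for i in range(1, len(score) - 1)
         if score[i] > thr and score[i] >= score[i - 1] and score[i] > score[i + 1]]
print(peaks)
```

The zero-mean template is the key detail: during a note's steady decay the correlation stays flat, so only the abrupt energy rise at an attack produces a scoring peak.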


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Shane Colburn ◽  
Arka Majumdar

Ultrathin meta-optics offer unmatched, multifunctional control of light. Next-generation optical technologies, however, demand unprecedented performance. This will likely require design algorithms surpassing the capability of human intuition. For the adjoint method, this requires explicitly deriving gradients, which is sometimes challenging for certain photonics problems. Existing techniques also comprise a patchwork of application-specific algorithms, each narrow in scope and scatterer type. Here, we leverage algorithmic differentiation as used in artificial neural networks, treating photonic design parameters as trainable weights, optical sources as inputs, and encapsulating device performance in the loss function. By solving a complex, degenerate eigenproblem and formulating rigorous coupled-wave analysis as a computational graph, we support both arbitrary, parameterized scatterers and topology optimization. With iteration times below the cost of two forward simulations typical of adjoint methods, we generate multilayer, multifunctional, and aperiodic meta-optics. As an open-source platform adaptable to other algorithms and problems, we enable fast and flexible meta-optical design.
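The core idea, treating design parameters as trainable weights and following gradients of a performance objective, can be illustrated with a hand-differentiated toy: choosing per-scatterer phases so that fields from N point scatterers add constructively at a target focal point. The geometry, wavelength, and learning rate are invented for this sketch; the paper's forward model (rigorous coupled-wave analysis) and autodiff machinery are far richer.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
k = 2 * np.pi / 0.5                  # wavenumber for a 0.5 um wavelength
xs = np.linspace(-8.0, 8.0, N)       # scatterer positions along the aperture (um)
r = np.sqrt(xs**2 + 50.0**2)         # path lengths to a focus 50 um away
psi = k * r                          # propagation phase per scatterer
phi = rng.uniform(0, 2 * np.pi, N)   # design parameters ("trainable weights")

def intensity(phi):
    # Focal-point intensity: the "loss" (here maximized) of the design
    s = np.exp(1j * (phi + psi)).sum()
    return (s * s.conj()).real

for _ in range(1000):
    s = np.exp(1j * (phi + psi)).sum()
    grad = 2 * np.imag(s * np.exp(-1j * (phi + psi)))  # analytic dI/dphi
    phi += 0.01 * grad                                 # gradient ascent

print(round(intensity(phi), 2))  # should approach N**2 = 1024 as phases align
```

With all phases aligned at the focus, the fields add coherently and the intensity approaches its maximum of N squared; in the paper this role is played by an autodiff framework that backpropagates through the full electromagnetic solver instead of a hand-derived gradient.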

